Posted by SerCe 9/13/2025
The fact is, despite Oracle being a menace to the tech industry, Java under their watch is thriving. Which is weird, because I don't know anyone who gives them money for Java. I'm genuinely curious who these companies are and what their incentives are!
But nooo, Java thrives and flourishes under Oracle's protection.
Most of the hate comes from the overly complicated "enterprise design patterns" crap that took over the ecosystem in the late 90s into the 2000s, not the language itself. It's quite possible to write clean, clear, appropriately complex, well performing Java code.
On the plus side, of all the languages I've used Java is one of the absolute best when it comes to long term maintainability of code. This is why it's used so heavily in large enterprises with long-lived business critical code bases. Being the "COBOL of the 1990s/2000s" is not an insult, and as a language it is far superior to COBOL in every way. It's not a bad language to program in at all, while COBOL will make you hate your life.
It's also a safe language unless you break out of the JVM with JNI. It's the first safe language to get huge deployment if you don't count scripting languages. Safe doesn't mean you can't have security bugs of course, it just means you're not likely to have certain kinds of security bugs and stability problems like memory errors.
The JVM is really a fantastic piece of engineering and IMHO represents a whole direction in computing I feel sad that we didn't take. We opted to stay close to the metal with all the security, portability, code reuse, and other headaches that entails, instead of going into managed execution environments that make all kinds of compatibility and reuse and portability problems mostly go away.
The biggest current knock against Java I see is JNI, which unlike the core language is absolutely horrible. The second biggest knock is that the JVM is still kind of a memory pig. CPU performance is great, sometimes converging with C or Rust performance depending on workload, but it still hogs RAM.
Then you'd be happy to learn that it's been superseded by FFM: https://openjdk.org/jeps/454 (not in all situations, but in almost all).
> The second biggest knock is that the JVM is still kind of a memory pig
I would strongly recommend watching this keynote from this year's ISMM (International Symposium on Memory Management) on this very subject: https://www.youtube.com/watch?v=mLNFVNXbw7I
The long and short of it is that (and I'm oversimplifying the talk, of course) if you use less than 1GB of RAM per CPU core, then you're likely trading off CPU for RAM in a way that's detrimental, i.e. you're wasting a valuable resource (CPU) to save a resource (RAM) that you can't put to good use (because the amount of work you can do on a machine is determined by the first of these resources to be exhausted, so you should use them in the ratio they're provided by the hardware). Refcounting collectors and even manual memory management (unless you're using arenas almost exclusively) optimise for memory footprint at the expense of CPU. Put another way, the JVM takes advantage of the more plentiful RAM to save on the more costly CPU.
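To make the talk's rule of thumb concrete, here's a small sketch (my own illustration, not from the talk) that checks the heap-per-core ratio of the JVM it runs on; the 1 GiB/core threshold is the figure cited above:

```java
public class RamPerCore {
    public static void main(String[] args) {
        int cores = Runtime.getRuntime().availableProcessors();
        long maxHeap = Runtime.getRuntime().maxMemory(); // effective -Xmx
        double gibPerCore = maxHeap / (1024.0 * 1024.0 * 1024.0) / cores;
        System.out.printf("%d cores, %.2f GiB max heap per core%n", cores, gibPerCore);
        // Per the talk's rule of thumb: well under 1 GiB/core likely means
        // you're spending scarce CPU (extra GC work) to save RAM you
        // couldn't put to use anyway.
    }
}
```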
JNI was only ever designed to be good enough, and it is. The new FFM API aims to replace JNI in most cases, but it's designed to be "perfect". As a result, the new API took many years to develop, but JNI was quick to develop.
It would be nice to have the FFM API much sooner, but alternatives like JNR and JNA have been around for a long time. There wasn't a rush to develop a JNI replacement.
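For a feel of what FFM looks like in practice, here's a minimal sketch calling libc's `strlen` with the API finalized in JEP 454 (requires Java 22+; no JNI glue code or native stubs needed):

```java
import java.lang.foreign.*;
import java.lang.invoke.MethodHandle;

public class StrlenDemo {
    // Bind and call the native strlen(const char*) via the FFM linker.
    static long nativeStrlen(String s) throws Throwable {
        Linker linker = Linker.nativeLinker();
        MemorySegment addr = linker.defaultLookup().find("strlen").orElseThrow();
        MethodHandle strlen = linker.downcallHandle(
                addr,
                FunctionDescriptor.of(ValueLayout.JAVA_LONG, ValueLayout.ADDRESS));
        try (Arena arena = Arena.ofConfined()) {
            // allocateFrom copies the string as a NUL-terminated C string
            MemorySegment cString = arena.allocateFrom(s);
            return (long) strlen.invokeExact(cString);
        } // arena closes here; the native memory is freed deterministically
    }

    public static void main(String[] args) throws Throwable {
        System.out.println(nativeStrlen("hello")); // 5
    }
}
```

Compare that with the JNI equivalent, which would need a separate C file, a generated header, and a platform-specific build step.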
In my post, I’m referring to Java’s refusal to adopt “properties” (i.e. methods invoked using the same syntax used for fields) like VB, C#, JavaScript, Swift, et cetera.
That said, there is a shitload of "enterprise" fuckery in Java, but those Devs would have made a mess of any codebase anyway.
To build an average Java project, you have to install a specific version of the JDK, download a specific build system (Ant, Maven, Gradle, Bazel), hope everything works out on the first try - and if not, debug the most-likely-XML spec file searching for the invalid dependency buried in the 1000-line error output...
What Java is desperately missing is something like Python's `uv`.
---
Sibling comment mentioned that debugging Java itself is also a nightmare, which reminds me of the many Spring Boot projects I've had to debug. Stack traces containing 90% of boilerplate code layers, and annotations throwing me from one corner of the codebase to another like a ragdoll...
Admittedly, that's not inherently the problem of Java, but rather the problem of Spring. However, Spring is very likely to be found in enterprise projects.
> I’ve worked in Go codebases where it’s not simply “go build”.
A rather funny statement that says the opposite of what you intended. That you can expect most Go projects to be built with just `go build` is high praise.
build.gradle.kts:

java {
    toolchain {
        languageVersion = JavaLanguageVersion.of(17)
        // bonus: pick your favorite distribution with vendor = JvmVendorSpec.<X>
    }
}
> It won’t even tell you are building with wrong version.

Oh yes they will.
Right, "Class file has wrong version" doesn't explicitly tell you it's the wrong JDK. Gradle itself runs from a JDK 8, so even the install you made back in your Windows XP days will work fine.
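The "wrong version" number in that error is the class-file major version, which maps straight to a JDK release (52 = Java 8, 61 = Java 17; the mapping is simply major − 44). A small sketch that decodes it from a class-file header (the fake header written here is just for the demo):

```java
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.nio.file.Files;
import java.nio.file.Path;

public class ClassVersion {
    // Read the major version from a .class file header:
    // u4 magic, u2 minor_version, u2 major_version
    static int majorVersion(Path classFile) throws Exception {
        try (var in = new DataInputStream(Files.newInputStream(classFile))) {
            in.readInt();            // magic 0xCAFEBABE
            in.readUnsignedShort();  // minor version
            return in.readUnsignedShort();
        }
    }

    public static void main(String[] args) throws Exception {
        // Write a fake header for a Java 17 class file, then decode it.
        Path tmp = Files.createTempFile("Demo", ".class");
        try (var out = new DataOutputStream(Files.newOutputStream(tmp))) {
            out.writeInt(0xCAFEBABE);
            out.writeShort(0);   // minor
            out.writeShort(61);  // major: 61 - 44 = Java 17
        }
        int major = majorVersion(tmp);
        System.out.println("major=" + major + " -> Java " + (major - 44));
        // prints: major=61 -> Java 17
    }
}
```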
If your last experience with Java was 20 years ago and you think that for some reason it hasn't kept up with modern advancement and leapfrogged most languages (how's async doing in rust? virtual threads still stuck nowhere? cool.), sure, keep telling yourself that. You could at least keep it to yourself out of politeness though, or at the very least check that what you're saying is accurate.
I managed to build Docker Daemon - one of the most widely used and complex Go projects - from source with a simple `go build`.
I've never figured out how to build Jenkins from source.
Do you know of any widely used Java project that has a simple build process? Maybe a positive anecdote could change my mind.
to build a modern java project with gradle, you need _any_ jvm installed on your pc. you execute a task via the gradle wrapper (which is committed alongside the code) and it will download and invoke the pinned version of gradle, which then downloads the configured java toolchain (version, vendor, etc.) if it can't find it on your machine.
it just works.
That's the thing - it "just works" if you're on a "modern" Java project, if it uses Gradle, and if it uses it properly. Most of the Java projects I've had the pleasure of working on professionally were not up to that standard of quality.
You may argue that it's up to developers to keep build system simple, but in that case C++ tooling also "just works" because you can build a modern C++ project that uses CMake in two commands.
Good tooling prevents the project build process from becoming an undocumented Rube Goldberg machine.
Maven is mostly smooth sailing compared to Python's env solutions or the JS ecosystem. Maven is 21 years old. A quick search says Python has/had: pip, venv, pip-tools, Pipenv, Poetry, PDM, pyenv, pipx, uv, Conda, Mamba, Pixi.
Debugging is just fine. Modern debugging tools are there. There is remote debugging, (although limited) update & continue, evaluating custom expressions, etc. I don't know what they complain about. If using Clojure, it is also possible to change the running application completely.
Monitoring tools are also great. It is easy to gather runtime metrics for profiling or monitoring events. There are tools like Mission Control to analyse them.
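For instance, you can drive Flight Recorder programmatically with the `jdk.jfr` API and open the resulting file in Mission Control; a minimal sketch (the event name and file path are just examples):

```java
import java.nio.file.Files;
import java.nio.file.Path;
import jdk.jfr.Recording;

public class JfrSketch {
    public static void main(String[] args) throws Exception {
        Path out = Files.createTempFile("profile", ".jfr");
        try (Recording recording = new Recording()) {
            recording.enable("jdk.GarbageCollection"); // pick the events you care about
            recording.start();
            System.gc();                               // stand-in for the real workload
            recording.stop();
            recording.dump(out);                       // open this file in Mission Control
        }
        System.out.println("recorded " + Files.size(out) + " bytes");
    }
}
```

In production you'd more typically start a continuous recording with `-XX:StartFlightRecorder` and dump it on demand.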
Java may have a long history of compatibility issues between the compiler, tooling, and libraries, but it's all reasonably delimited to the Java language, so, if anything, an `nvm` equivalent such as sdkman.io should suffice.
JBang exists and (if I'm not mistaken) predates uv. See jbang.dev
Sun was aided by Oracle and IBM in the whole Java push during its early days.
Many apparently aren't aware of their roles in Java's history.
Oracle was the first RDBMS with Java drivers, rewrote all its GUI tooling in Java, put a JVM into the database, created the JSF framework, and acquired BEA and its JIT technology.
There was also the collaboration with Sun on the Network Computer thin client with its Java-based OS.
The pendulum is shifting on HN. We can finally give credit to Oracle and Facebook.
Anyway, if you don't want to buy a support service, either from Oracle or any of the other companies that sell it, the use of the JDK is free. There is no "enterprise" flavour of the JDK, paid features, or use restrictions as there used to be under Sun's management. Java is obviously freer now - as in beer or in speech - than it was 20 years ago.
While everything you say sounds true, it’s not free - it’s at gunpoint, it’s a lie, and if you’re big enough Oracle will come after you for subscription fees.
RedHat does the same crap. Heaven forbid you run RHEL on RHEL in containers, you’re gonna get fleeced.
You can run unlimited RHEL containers on a subscribed RHEL system. It's even set up where if you run a UBI container (a redistributable subset of RHEL content) on a subscribed RHEL system it automatically upgrades to full RHEL.
The idea there is that it's cheaper for companies with legacy software that isn't actively maintained to pay for some portion of the performance improvements in modern JVM generations than to ramp up maintenance to upgrade to modern Java, and this can help fund the continued evolution of OpenJDK.
IBM kind of thought of it, but ended up withdrawing the offer.
So the anti-Oracle folks would have seen Java wither and die at version 6, and the MaxineVM technology would never have been released as GraalVM.
No per-core fee needed.
c10k was coined in the last millennium, when a brand new PC would have 128 MB of memory and a single-core 400 MHz CPU. And people were doing it with async IO, not threads, back then. (Around the same time Java people got interested in VolanoMark, which is a similar thing but with threads - since Java didn't even have nonblocking IO then.)
See e.g. this about 100k+ threads on Linux in 2002: https://lkml.iu.edu/hypermail/linux/kernel/0209.2/1153.html .. which mostly concerns itself with conserving memory address space, since they were dealing with the 32-bit 4GB limitation of decades past.
(c10k was also about OS TCP stack limitations that were soon fixed)
It's obsolete not just because of new hardware but because we got better ergonomics for these new programming styles.
https://www2.eecs.berkeley.edu/Pubs/TechRpts/2006/EECS-2006-...
Another thing is: what are those lightweight threads doing? If they only play with the CPU, that's OK; you pay the GC penalty and that's all. But if they access limited resources (a database, another HTTP service, etc.), then in a real application you face the standard issue: you cannot hit the targeted system with any load you want; that external system will push back, sooner or later.
The good thing about reactive programming is that it does not pretend the above problem doesn't exist. It forces you to handle errors and backpressure, as those problems will not magically disappear when we switch to green threads, lightweight threads, etc. There is no free lunch here: the network has its restrictions, databases have to write to disk eventually, and so on.
The focus on "100k threads" and GC overhead is a red herring. The real win isn't spawning a massive number of threads, but automatically yielding on network I/O, like e.g. goroutines do. In an I/O bound web application, you'd have a single virtual thread handling the whole request, just like a goroutine does. The GC overhead caused by the virtual thread is minuscule compared to the heap allocations caused by everything else going on in the request. If you really have a scenario for 100k virtual threads, they would not be short lived.
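The thread-per-request style this enables looks like plain blocking code; a minimal sketch using the standard virtual-thread executor (Java 21+), with `Thread.sleep` standing in for a blocking network call:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class VirtualThreadsSketch {
    public static void main(String[] args) throws Exception {
        // One virtual thread per task; sleeping (like any blocking I/O)
        // releases the carrier thread instead of pinning an OS thread.
        try (ExecutorService pool = Executors.newVirtualThreadPerTaskExecutor()) {
            List<Future<Integer>> results = new ArrayList<>();
            for (int i = 0; i < 10_000; i++) {
                int id = i;
                results.add(pool.submit(() -> {
                    Thread.sleep(10); // stand-in for the network call
                    return id;
                }));
            }
            long sum = 0;
            for (Future<Integer> f : results) sum += f.get();
            System.out.println(sum); // 49995000
        }
    }
}
```

Ten thousand concurrent "requests", each written as straight-line blocking code, completing in roughly the time of one sleep rather than ten thousand.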
> But if they access limited resources (database, another HTTP service), etc. in real application you face the standard issue: you cannot hit the targeted system with any data you want
Then why would you do it? That sounds like an architectural problem, not a virtual thread problem. In an actor system, for example, you wouldn't hit the database directly from 100k different actors.
> The good thing in reactive programming is that it does not try to pretend that above problem does not exist.
This compares a high-level programming paradigm, complete with its own libraries and frameworks, to a single, low-level concurrency construct. The former is a layer of abstraction that hides complexity, while the latter is a fundamental building block that, by design, does not and cannot hide anything.
> It forces to handle errors, to handle backpressure, as those problems will not magically disappear when we switch to green threads, lightweight threads, etc.
Synchronous code handles errors in the most time-tested and understandable way there is. It is easy to reason about and easy to debug. Reactive programming requires explicit backpressure handling because its asynchronous nature creates the problem in the first place. The simplest form of "backpressure" in synchronous code with a limited amount of threads is the act of blocking. For anything more than that, there are the classic tools (blocking queues, semaphores...) or higher-level libraries built on top of them.
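As a sketch of that classic approach (class name, permit count, and `doQuery` are all illustrative, not a real API): a semaphore caps how many callers can be in flight against the limited resource, and everyone else simply blocks - which is the backpressure.

```java
import java.util.concurrent.Semaphore;

class ThrottledClient {
    // Assumed capacity of the downstream resource, e.g. a DB connection pool.
    private final Semaphore permits = new Semaphore(100);

    String query(String sql) throws InterruptedException {
        permits.acquire();           // blocks callers beyond 100 in flight
        try {
            return doQuery(sql);
        } finally {
            permits.release();
        }
    }

    private String doQuery(String sql) {
        return "rows for: " + sql;   // stand-in for the real database call
    }
}
```

No callbacks, no operators; errors propagate as ordinary exceptions through the `try`/`finally`.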
This is of course what normal OS threads do as well, they get suspended when blocking on IO. Which is why 100k OS threads doing IO works fine too.
More detail here: https://github.com/dvyukov/perf-load. We recently implemented the same idea without requiring context-switch events: https://github.com/google/highway/blob/master/hwy/profiler.h...
For CPU tracing, with no sampling errors, use Apple’s M4 with the latest Xcode’s Instruments.
Xcode doesn't know about Java's internals, so it doesn't know about Java frames, although it can help with native traces.
I'm not sure why anyone would improve this language when the developers are so frankly pathetic. Let them use 1.8. The ecosystem has improved. Devs have gotten worse.
Java has become a land of opportunists just trying to pass off their non-Java, maybe non-programmer skills.
I’m running an experiment.
A few days ago I flagged a piece someone else had written with AI. It has a specific cadence and some typical patterns. But many people seemed to buy it before I commented. I was surprised.
Today I pushed the boundary further and it clearly was that boundary.
Check my comment history.
I started out just saying “rephrase this so it sounds tighter” and moved recently towards just jotting rough notes and saying “make an HN comment out of this” and then editing.
I’ve been using gpt-5. I was going to see how Claude sonnet 4 performs at coming across as human-written / flagging some spidey senses.
(This was all by hand.)