TINFOIL: I've sometimes wondered whether Azure or AWS use bots to drive traffic hits to the sites they host, to generate money... they know you are hosted with them.. they have your info.. send out bots to drive micro-accumulation. Slow boil..
GCE is rare in my experience. Most bots I see are on AWS. The DDoS-adjacent, hyper-aggressive bots that try random URLs and scan for exploits tend to be on Azure or use VPNs.
AWS is bad about responding when you report malicious traffic. Azure has been completely unresponsive and didn't react, even for C&C servers.
Do nothing, win.
They are the primary beneficiaries buying this data, since they are the largest AI players.
I suspect that this goes for many different SDKs. Personally, I am really, really sick of hearing "That's a solved problem!", whenever I mention that I tend to "roll my own," as opposed to including some dependency, recommended by some jargon-addled dependency addict.
Bad actors love the dependency addiction of modern developers, and have learned to set some pretty clever traps.
Brings a new meaning to dependency injection.
My personal beef is that most of the time it acts like hidden global dependencies, and the configuration of those dependencies, along with their lifetimes, becomes harder to understand by not being traceable in the source code.
It's equivalent to partial application.
An uninstantiated class that follows the dependency injection pattern is equivalent to a family of functions with N + M_k arguments, where N is the number of constructor parameters and M_k is the number of parameters of method k.
Upon instantiation by passing constructor arguments, you've created a family of functions, each with its own distinct set of M_k parameters and with the N arguments in common.
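A minimal Java sketch of that equivalence (hypothetical names; N = 1 constructor argument, one method k with M_k = 1):

```java
import java.util.function.Function;

// Constructor injection with N = 1 constructor argument; the method has M_k = 1.
class Greeter {
    private final String prefix;              // the N shared arguments
    Greeter(String prefix) { this.prefix = prefix; }
    String greet(String name) {               // the M_k per-call arguments
        return prefix + ", " + name;
    }
}

public class PartialApplicationDemo {
    public static void main(String[] args) {
        // Object form: instantiation fixes the N arguments once...
        Greeter hello = new Greeter("Hello");
        System.out.println(hello.greet("Ada"));          // Hello, Ada

        // ...which is exactly partial application of a curried function:
        Function<String, Function<String, String>> greet =
            prefix -> name -> prefix + ", " + name;
        Function<String, String> helloFn = greet.apply("Hello"); // "instantiation"
        System.out.println(helloFn.apply("Ada"));        // Hello, Ada
    }
}
```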
That's the best way to think of it fundamentally. But the main implication is that at some point something has to know how to resolve those dependencies - i.e. they can't just be constructed and then injected from magic land. So global cradles/resolvers/containers/injectors/providers (depending on your language and framework) are also typically part and parcel of DI, and that can have some big implications for the structure of your code that some people don't like. Also, you can inject functions and methods, not just constructors.
This is all well and good, but you also need a bunch of code that handles resolving those dependencies, which oftentimes ends up being complex and hard to debug, and which causes runtime errors instead of compile-time errors - something I find more or less unacceptable.
Edit: to elaborate on this, the number of times I’ve seen an “enterprise” project that didn’t use a DI framework is a grand total of zero. I’ve done DI directly in personal projects and it was fine, but in most cases you don’t get to make that choice.
Just last week, when working on a Java project that’s been around for a decade or so, there were issues after migrating it from Spring to Spring Boot: when compiled through the IDE, with the configuration set to allow lazy dependency resolution, it would work (there were too many circular dependencies to change the code instead), but when built in a container by Maven, that same exact code and configuration would no longer work and injection would fail.
I’m hoping it’s not one of those weird JDK platform bugs but rather an issue with how the codebase is compiled during the container image build, but the issue is mind boggling. More fun, if you take the .jar that’s built in the IDE and put it in the container, then everything works, otherwise it doesn’t. No compilation warnings, most of the startup is fine, but if you build it in the container, you get a DI runtime error about no lazy resolution being enabled even if you hardcode the setting to be on in Java code: https://docs.spring.io/spring-boot/api/kotlin/spring-boot-pr...
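For reference, a minimal sketch of hardcoding that setting in Java code, assuming Spring Boot 2.2+ (where SpringApplication.setLazyInitialization exists); it is equivalent to spring.main.lazy-initialization=true in application.properties:

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class App {
    public static void main(String[] args) {
        SpringApplication app = new SpringApplication(App.class);
        // Hardcode lazy bean initialization, equivalent to setting
        // spring.main.lazy-initialization=true in application.properties.
        app.setLazyInitialization(true);
        app.run(args);
    }
}
```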
I’ve also seen similar issues before containers, where the app would run on Jetty locally and on Tomcat in server environments, leading to everything compiling and working locally but throwing injection errors on the server.
What’s more, it’s not like you can (easily) put a breakpoint on whatever is trying to inject the dependencies - after years of Java and Spring I grow more and more convinced that anything that doesn’t generate code that you can inspect directly (e.g. how you can look at a generated MapStruct mapper implementation) is somewhat user hostile and will complicate things. At least modern Spring Boot is good in that more of the configuration is just code, because otherwise good luck debugging why some XML configuration is acting weird.
In other words, DI can make things more messy due to a bunch of technical factors around how it’s implemented (also, good luck reading those stack traces), although even in the case of Java something like Dagger (https://dagger.dev/) feels more sane, despite never really catching on.
Of course, one could say that circular dependencies or configuration issues are project specific, but given enough time and projects you will almost inevitably get those sorts of headaches. So while the theory of DI is nice, you can’t just have the theory without practice.
To me it’s rather anti-functional. Normally, when you instantiate a class, the resulting object’s behavior depends only on the constructor arguments you pass it (= the behavior is purely a function of the arguments). With dependency injection, the object’s behavior may depend on some hidden configuration, and not even inspecting the class’ source code will be able to tell you the source of that behavior, because there’s only an @Inject annotation without any further information.
Conversely, when you modify the configuration of which implementation gets injected for which interface type, you potentially modify the behavior of many places in the code (including, potentially, the behavior of dependencies your project may have), without having passed that code any arguments to that effect. A function executing that code suddenly behaves differently, without any indication of that difference at the call site, or traceable from the call site. That’s the opposite of the functional paradigm.
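To make that concrete, here is a self-contained toy (a stand-in @Inject annotation and a ten-line "container", not any real framework) showing how the binding lives outside the class that uses it:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Field;

// Stand-in @Inject marker for illustration, NOT a real framework's annotation.
@Retention(RetentionPolicy.RUNTIME)
@interface Inject {}

interface Greeter { String greet(); }

class Service {
    @Inject Greeter greeter;   // nothing here says which Greeter you will get
    String run() { return greeter.greet(); }
}

public class TinyContainer {
    // The binding lives here, far away from Service's source code.
    static Object resolve(Class<?> type) {
        if (type == Greeter.class) return (Greeter) () -> "hello from config";
        throw new IllegalStateException("no binding for " + type);
    }

    public static void main(String[] args) throws Exception {
        Service s = new Service();
        // Field injection via reflection: from the perspective of Service and
        // its callers, the value appears "out of thin air".
        for (Field f : Service.class.getDeclaredFields()) {
            if (f.isAnnotationPresent(Inject.class)) {
                f.setAccessible(true);
                f.set(s, resolve(f.getType()));
            }
        }
        System.out.println(s.run());
    }
}
```

Changing resolve() changes the behavior of Service everywhere, without touching Service or any of its call sites, which is exactly the traceability problem described above.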
It sounds like you have a gripe with a particular DI framework and not the idea of Dependency Injection. Because
> Normally, when you instantiate a class, the resulting object’s behavior only depends on the constructor arguments you pass it (= the behavior is purely a function of the arguments)
With Dependency Injection this is generally still true, even more so than normal, because you're making the constructor's dependencies explicit in the arguments. If you have a class CriticalErrorLogger(), you can't directly tell where it logs to: is it using a flat file, stdout, or a network logger? If you instead have a class CriticalErrorLogger(logger io.Writer), then when you create it you know exactly what it's using to log, because you had to instantiate the writer and pass it in.
Or, like Kortilla said, instead of passing in a class or struct you can pass in a function; using the same example, something like CriticalErrorLogger(fn write).
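A minimal Java rendering of that function-passing variant (Consumer<String> standing in for fn write; names are from the comment's example):

```java
import java.util.function.Consumer;

class CriticalErrorLogger {
    private final Consumer<String> write;   // the injected "fn write"
    CriticalErrorLogger(Consumer<String> write) { this.write = write; }
    void log(String msg) { write.accept("[CRITICAL] " + msg); }
}

public class Demo {
    public static void main(String[] args) {
        // At the call site you can see exactly where the logs will go.
        CriticalErrorLogger toStdOut = new CriticalErrorLogger(System.out::println);
        CriticalErrorLogger toStdErr = new CriticalErrorLogger(System.err::println);
        toStdOut.log("disk full");
        toStdErr.log("disk full");
    }
}
```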
My issue with that is this: from the point of view of the code accessing the injected value (and from the point of view of that code's callers), the value appears as if out of thin air. There is no way to trace back from that code where the value came from. Similarly, when defining which value will be injected, it can be difficult to trace all the places where it will be injected.
In addition, there are often lifetime issues involved, when the injected value is itself a stateful object, or may indirectly depend on mutable, cached, or lazy-initialized, possibly external state. The time when the value's internal state is initialized or modified, or whether or not it is shared between separate injection points, is something that can't be deduced from the source code containing the injection points, but is often relevant for behavior, error handling, and general reasoning about the code.
All of this makes it more difficult to reason about the injected values, and about the code whose behavior will depend on those values, from looking at the source code.
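The lifetime point fits in a few lines (hypothetical names; with a DI container, which of these two behaviors you get is decided in configuration, not at these call sites):

```java
import java.util.function.Supplier;

class Counter { int n; int next() { return ++n; } }

class A { final Counter c; A(Counter c) { this.c = c; } }
class B { final Counter c; B(Counter c) { this.c = c; } }

public class Lifetimes {
    public static void main(String[] args) {
        // Shared ("singleton") lifetime: both injection points see the same state.
        Counter shared = new Counter();
        A a1 = new A(shared);
        B b1 = new B(shared);
        a1.c.next();
        System.out.println(b1.c.next());   // prints 2: state crossed over

        // Fresh ("prototype") lifetime: each injection point is isolated.
        Supplier<Counter> fresh = Counter::new;
        A a2 = new A(fresh.get());
        B b2 = new B(fresh.get());
        a2.c.next();
        System.out.println(b2.c.next());   // prints 1: no sharing
    }
}
```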
I agree with your definition, except for this part: you don't need any framework to do dependency injection. It's simply the idea that instead of having an abstract base class CriticalErrorLogger, with concrete implementations StdOutCriticalErrorLogger, FileCriticalErrorLogger, and AwsCloudwatchCriticalErrorLogger that bake their dependency into the class design, you instead have a concrete class CriticalErrorLogger(dep *dependency) and create dependency objects externally that implement identical interfaces in different ways. You do text formatting, generate a traceback, etc., and then call dep.write(myFormattedLogString), and the dependency handles whatever that means.
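Spelled out in Java (hypothetical LogSink/StdOutSink names; the comment's dep *dependency becomes an interface-typed field):

```java
// The shared interface the dependency objects implement.
interface LogSink { void write(String formatted); }

// Dependencies created externally, implementing it in different ways.
class StdOutSink implements LogSink {
    public void write(String formatted) { System.out.println(formatted); }
}
class StdErrSink implements LogSink {
    public void write(String formatted) { System.err.println(formatted); }
}

// One concrete logger instead of a subclass per destination.
class CriticalErrorLogger {
    private final LogSink dep;
    CriticalErrorLogger(LogSink dep) { this.dep = dep; }
    void log(String msg) {
        String formatted = "[CRITICAL] " + msg;  // formatting, traceback, etc.
        dep.write(formatted);                    // the dep decides what that means
    }
}

public class NoFrameworkDi {
    public static void main(String[] args) {
        new CriticalErrorLogger(new StdOutSink()).log("works without a container");
        new CriticalErrorLogger(new StdErrSink()).log("same class, different sink");
    }
}
```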
I agree with you that most DI frameworks are too clever and hide too much, and some forms of DI like setter injection and reflection based injection are instant spaghetti code generators. But things like Constructor Injection or Method Injection are so simple they often feel obvious and not like Dependency Injection even though they are. I love DI, but I hate DI frameworks; I've never seen a benefit except for retrofitting legacy code with DI.
And yeah, it does add the issue of lifetime management. That's an easy place to F things up in your code using DI, and it requires careful thought in some circumstances. I can't argue against that.
But DI doesn't need frameworks or magic methods or attributes to work. And there's a lot of situations where DI reduces code duplication, makes refactoring and testing easier, and actually makes code feel less magical than using internal dependencies.
The basic principle is much simpler than most DI frameworks make it seem. Instead of initializing a dependency internally, receive the dependency in some way. It can be through overly abstracted layers or magic methods, but it can also be as simple as adding an argument to the constructor or a given method that takes a reference to the dependency and uses that.
edit: made some examples less ambiguous
The term Dependency Injection was coined by Martin Fowler with this article: https://martinfowler.com/articles/injection.html. See how it presents the examples in terms of wiring up components from a configuration, and how it concludes with stressing the importance of "the principle of separating service configuration from the use of services within an application". The article also presents constructor injection as only one of several forms of dependency injection.
That is how everyone understood dependency injection when it became popular 10-20 years ago: A way to customize behavior at the top application/deployment level by configuration, without having to pass arguments around throughout half the code base to the final object that uses them.
Apparently there has been a divergence of how the term is being understood.
[0] https://en.wikipedia.org/wiki/Strategy_pattern
[1] The fact that Car is abstract in the example is immaterial to the pattern, and a bit unfortunate in the Wikipedia article, from a didactic point of view.
DI in Java is almost completely disconnected from what the Strategy pattern is, so it doesn't make sense to use one to refer to the other there.
I'm talking more specifically about Aspect-Oriented Programming, though, and about DI containers in OOP, which seemed pretty clever in theory but have a lot of issues in reality.
I take no issues with currying in functional programming.
But nobody seems to do this diligence. It’s just “we are in a rush. we need X. dependency does X. let’s use X.” and that’s it!
Wrong question. “Are you paid to audit this code?” And “if you fail to audit this code, whose problem is it?”
Have you ever worked anywhere that said "go ahead and slow down on delivering product features that drive business value so you can audit the code of your dependencies, that's fine, we'll wait"?
I haven't.
Is that greed?
I can find many reasons to be critical of that developer, things like creating a product for a market segment that is saturated, and likely doing so because it is low-hanging fruit (both conceptually and in terms of complexity). I can be critical of their moral judgement in how they decided to generate income from their poor business judgment. But I don't think it's right to automatically label them as greedy. They may be greedy, but they may also just be trying to generate income from their work.
Umm, yes? You are not owed anything in this life, certainly not income for your choice to spend your time on building a software product no one asked for. Not making money on it is a perfectly fine outcome. If you desperately need guaranteed money, don't build an app expecting it to sell; get a job.
Technically true, but a bit of perspective might help. The consumer market is distorted by free (as in beer) apps that do a bunch of shitty things that should in many cases be illegal, or should require much more informed consent than today, like tracking everything they can. Then you have VC-funded ”free” as well, where the end game is to raise prices slowly to boil the frog. Then you have loss leaders from megacorps, and a generally anti-competitive business culture.
Plus, this is not just in the Wild West shady places, like the old piratebay ads. The top result for ”timer” on the App Store (for me) is indeed a timer app, but with an IAP of an $800/year subscription… facilitated by Apple Inc, who gets 15-30% of the bounty.
Look, the point is it’s almost impossible to break into consumer markets because everyone else is a predator. It’s a race to the bottom, ripping off clueless customers. Everyone would benefit from a fairer market. Especially honest developers.
That’s got to be money laundering or something else illicit? No one is actually paying that for a timer app?
We could have people ask for software in a more convenient way.
Not making money could be an indication the software isn't useful, but what if it is? What can the collective do in that zone?
I imagine one could ask and pay for unwritten software then get a refund if it doesn't materialize before your deadline.
Why is discovery (of many creations) willingly handed over to a handful of megacorps?? They seem to think I want to watch and read about Trump and Elon every day.
Promoting something because it is good is a great example of a good thing that shouldn't pay.
AI is making this worse than ever, though. I am constantly having to tell devs that their work is failing to meet requirements, because AI is just as bad as a junior dev when it comes to reaching for a dependency. It’s like we need training wheels for the prompts juniors are allowed to write.
The "network sharing" behavior in these SDKs is the sole purpose of the SDK. It isn't being included as a surprise along with some other desirable behavior. What needs to stop is developers including these SDKs as a secondary revenue source in free or ad-supported apps.
Doubt it. This is just one (of many) carrots that are used to entice developers to include dodgy software in their apps.
The problem is a lot bigger than these libraries. It's an endemic cultural issue. Much more difficult to quantify or fix.