Sooner or later we always hit the N+1 query problem, which can only be resolved with a query builder or just plain old SQL.
It was always a mess, and these days I can't even be bothered to try anymore because it has cost me a lot of hours and money.
Congrats, you now have your own little ORM.
OP is never implying they intend to maintain a one-to-one correspondence between the DB and objects and do that through manipulating objects only. Mapping the results of hand-written queries to structs and updating the DB yourself based on what is in those structs is not an ORM at all.
Through that lens, the parts where you load and save object state are redundant. You're going to throw those objects away after the request anyway. Just take your request and build an UPDATE, etc. Use record types merely as a way to define your schema.
When I was in RoR world, pretty much every N+1 query I saw was due to lack of RTFM.
[1]: I made this up
Because it's half a dozen joins, and hence not N+1 queries but actually N*6+1 queries...
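To make the comment above concrete, here's a minimal stdlib-only Python sketch (table and column names are made up) that counts round-trips: loading N rows and then fetching each row's relation one at a time issues N+1 queries, while a single JOIN does it in one. With k relations per row, the naive version becomes N*k+1.

```python
import sqlite3

# Hypothetical schema: posts with authors.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE author (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE post (id INTEGER PRIMARY KEY, author_id INTEGER, title TEXT);
    INSERT INTO author VALUES (1, 'ann'), (2, 'bob');
    INSERT INTO post VALUES (1, 1, 'a'), (2, 2, 'b'), (3, 1, 'c');
""")

queries = 0

def run(sql, args=()):
    """Execute a statement while counting round-trips."""
    global queries
    queries += 1
    return conn.execute(sql, args).fetchall()

# N+1 style: one query for the posts, then one more per post.
posts = run("SELECT id, author_id, title FROM post")
for _, author_id, _ in posts:
    run("SELECT name FROM author WHERE id = ?", (author_id,))
n_plus_one = queries  # 1 + len(posts)

# Single JOIN: one round-trip total for the same data.
queries = 0
rows = run("""SELECT post.title, author.name
              FROM post JOIN author ON author.id = post.author_id""")
joined = queries
```

With three posts the naive loop issues four queries where the JOIN issues one; the gap grows linearly with the row count.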
And yes, RTFM is nice; the problem is, it's my fucking partners who should've done this before we shipped it to the customer, a project they then abandoned and I did not.
On the other hand, an async ORM sounds like an (n+1)+(n+2)+...+(n+m) problem
[1]: https://diesel.rs/
The codegen part makes all the columns, tables, and so on checked at compile time (name and type) like Diesel, with a query builder that's more natural, like SeaORM. I hope the query builder does not end up too magical like SQLAlchemy with its load of footguns, and stays close in spirit to Diesel, which is "write SQL in Rust syntax".
I think time will tell, and for now I'm keeping my Diesel in production :D
I mainly use SQLx; it's simple to use, and the query! and query_as! macros are good enough for most cases.
I remember fighting with handling enums in relations for a while, and now just default to manually mapping everything.
SQLx sucks at dynamic queries. Dynamic predicates, WHERE IN clauses, etc.
For SQLx to be much more useful, their static type checker needs to figure out how to work against these. And it needs a better query builder DSL.
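The core difficulty with dynamic predicates is language-agnostic: the number of bind placeholders isn't known until runtime, so a statically checked query macro can't cover it and you fall back to generating SQL text. A minimal sketch in Python (names are made up) of what "dynamic WHERE IN" generation looks like:

```python
def where_in(column, values):
    """Build a parameterized IN clause for a runtime-sized list.

    The placeholder count depends on len(values), which is exactly
    what compile-time query checking cannot see.
    """
    placeholders = ", ".join("?" for _ in values)
    return f"{column} IN ({placeholders})", list(values)

clause, params = where_in("id", [3, 1, 4])
sql = f"SELECT * FROM users WHERE {clause}"
```

The values still go through bind parameters, so this stays injection-safe; it's only the SQL *shape* that is assembled at runtime.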
It doesn't end up being too bad though, except for the loss of compile time syntax checking. Manually handling joins can be kind of nice, it's easier to see optimizations when everything is explicit.
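"Manually handling joins" here usually means folding a flat joined result set into nested structures yourself instead of relying on an ORM's eager loading. A stdlib-only Python sketch of that grouping step (schema and names are invented for illustration):

```python
import sqlite3
from collections import OrderedDict

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE todo_list (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE item (id INTEGER PRIMARY KEY, list_id INTEGER, body TEXT);
    INSERT INTO todo_list VALUES (1, 'chores');
    INSERT INTO item VALUES (1, 1, 'dishes'), (2, 1, 'laundry');
""")

rows = conn.execute("""
    SELECT l.id, l.name, i.body
    FROM todo_list l LEFT JOIN item i ON i.list_id = l.id
    ORDER BY l.id, i.id
""").fetchall()

# Group the flat rows by list id; each list carries its own items.
lists = OrderedDict()
for list_id, name, body in rows:
    entry = lists.setdefault(list_id, {"name": name, "items": []})
    if body is not None:  # LEFT JOIN yields NULL for empty lists
        entry["items"].append(body)
```

Everything is explicit: the query is one round-trip, and the mapping from rows to objects is a visible loop you can optimize or change.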
> The common wisdom is to maximize productivity when performance is less critical. I agree with this position. When building a web application, performance is a secondary concern to productivity. So why are teams adopting Rust more often where performance is less critical? It is because once you learn Rust, you can be very productive.
> Productivity is complex and multifaceted. We can all agree that Rust's edit-compile-test cycle could be quicker. This friction is countered by fewer bugs, production issues, and a robust long-term maintenance story (Rust's borrow checker tends to incentivize more maintainable code). Additionally, because Rust can work well for many use cases, whether infrastructure-level server cases, higher-level web applications, or even in the client (browser via WASM and iOS, MacOS, Windows, etc. natively), Rust has an excellent code-reuse story. Internal libraries can be written once and reused in all of these contexts.
> So, while Rust might not be the most productive programming language for prototyping, it is very competitive for projects that will be around for years.
It is this culture thing that makes adopting Rust for web apps worthwhile: it counters the drawback of manual memory management.
If you hire an engineer already familiar with Rust you are sure you get someone who is sane. If you onboard someone with no Rust background you can be pretty sure that they are going to learn the right way (tm) to do everything, or fail to make any meaningful contribution, instead of becoming a -10x engineer.
If you work in a place with a healthy engineering culture that trains people well and has good infra, it doesn't really matter; you may as well use C++. But for those of us not so lucky, Rust helps a lot, and it is not about memory safety at all.
As time passes, the more I feel like a minority in adoring Rust while detesting async. I have attempted it a number of times, but it seems incompatible with my brain's idea of structure. Not asynchronous or concurrent programming in general, but async/await in Rust. It appears that most of the networking libraries have committed to this path, and embedded is moving in its direction too.
I bring this up because a main reason for my distaste is async's incompatibility with non-async code. I also bring this up because the lack of a Django- or SQLAlchemy-style ORM is one reason I continue to write web applications in Python.
So you use gevent/greenlet?
It’s really not that bad, you might just need a better mental model of what’s actually happening.
And in the opposite situation: if you call an async function, you are doing IO, so your function must be either async or blocking; there's no third way in this direction. So when you're doing IO you have to make a choice: either make it explicit (and thus declare the function async) or hide it (by making a blocking call).
A blocking function is just a function doing IO that hides it from the type system and pretends to be a regular function.
There's a fundamental difference between a CPU-heavy workload that keeps a thread busy and a blocking syscall: if you have as many CPU-heavy tasks as CPU cores, there's fundamentally not much to do about it, and it means your server is under-provisioned for your workload, whereas a blocking syscall only blocks artificially and can be side-stepped.
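The "blocking is only artificial" point can be sketched in any async runtime; here's a stdlib Python version (the blocking call is a stand-in `time.sleep`): a blocking operation can be shunted onto a worker thread so the event loop keeps running and the waits overlap, whereas a CPU-bound task would genuinely occupy a core for its full duration.

```python
import asyncio
import time

async def main():
    start = time.monotonic()
    # Two blocking sleeps, each off-loaded to a worker thread,
    # run concurrently instead of back to back (~0.2s, not 0.4s).
    await asyncio.gather(
        asyncio.to_thread(time.sleep, 0.2),
        asyncio.to_thread(time.sleep, 0.2),
    )
    return time.monotonic() - start

elapsed = asyncio.run(main())
```

Replace the sleeps with a tight CPU loop and no amount of off-loading helps once every core is busy, which is the distinction the comment is drawing.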
Also, the IO and the execution being completely tied (the executor provides the IO) is a wrong choice in my opinion. Hopefully in the future there is a way to implement async IO via Futures without relying on the executor, maybe by std providing more than just a waker in the passed-in context.
It's more a consequence of having let tokio become the default runtime, instead of having the foundational building blocks in the standard library, than a language issue. But yes, the end result is unfortunate.
Creating your own file format is always difficult. Now, you have to come up with syntax highlighting, refactoring support, go to definition, etc. When I prototype, I tend to rename a lot of my columns and move them around. That is when robust refactoring support, which the language's own LSP already provides, is beneficial, and this approach throws them all away.
Prisma is popular enough that it also has an LSP and syntax highlighting widely available. For a simple DSL this is actually very easy to build. Excited to see something similar in the Rust ecosystem.
In this case, for example, it looks like the generated code needs global knowledge of related ORM types in the data model, and that just isn't supported by proc-macros. You could push some of that into the trait system, but it would be complex to the point where a custom DSL starts to look appealing.
Proc-macros also cannot be run "offline", i.e. you can't commit their output to version control. They run every time the compiler runs, slowing down `cargo check` and rust-analyzer.
Ideally I would use something akin to Go Jet.
In my experience, Dynamo and other NoSQL systems are really expressive and powerful when you take the plunge and make your own ORM. That’s because the model of nosql can often play much nicer with somewhat unique structures like
- single-table patterns
- fully denormalized or graph-style structures
- compound sort keys (e.g. category-prefixed)
Because of that, I would personally recommend developing your own ORM layer, despite the initial cost
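The compound sort key pattern mentioned above is easy to sketch without any Dynamo dependency; the key shapes below are invented for illustration. The point is that prefix queries (Dynamo's `begins_with` on a sort key) fall out of plain string ordering:

```python
def sort_key(category, item_id):
    """Build a category-prefixed compound sort key."""
    return f"CAT#{category}#ITEM#{item_id}"

keys = sorted([
    sort_key("home", "42"),
    sort_key("work", "7"),
    sort_key("home", "13"),
])

# A prefix scan over the sorted keys plays the role of a
# begins_with(sk, "CAT#home#") query: all "home" items are
# contiguous in key order.
home = [k for k in keys if k.startswith("CAT#home#")]
```

A hand-rolled ORM layer over Dynamo is mostly helpers like this: key construction, key parsing, and mapping items back to your own types.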
Developing your own ORM is almost always a waste of time and a bad idea.
Great to see some development in this for Rust, perhaps after it becomes stable I may even switch my SaaS to it.
Your code ends up using the driver raw in these cases, so why not just use the driver for everything? Your codebase would be consistent at that point
You can extend Diesel (and probably many other ORMs; Diesel is just particularly easy here) to support any DB feature you want.
> It is highly unlikely that an ORM provides support, much less a good abstraction, over features that only 1/N supported DBMS have.
That depends on the ORM's flexibility and popularity. It may not provide support out of the box, but it can make it easy to add.
> Your code ends up using the driver raw in these cases, so why not just use the driver for everything? Your codebase would be consistent at that point
The main point of using an ORM for me is that I get type verification; raw SQL (as in text) breaks too easily.
Might have improved since last I checked, but I was pretty confused.
Case in point: Django is really good about DB-specific functionality and letting you easily add in extension-specific stuff. They treat "you can only do this with raw SQL" more or less as an ORM API design issue.
My biggest critique of Django’s ORM is its grouping and select clause behavior can be pretty magical, but I’ve never been able to find a good API improvement to tackle that.
The simplest example is you can't build a Django object with a collection on it. Take the simplest toy example: a todo list. The natural model is simple: a todo list has a name and a list of items. You can't do that in Django. Instead you have to do exactly what you would do in SQL: two tables with item having a foreign key. There's no way to just construct a list with items in it. You can't test any business rules on the list without creating persistent objects in a db. It's crazy.
So yeah, Django lets you do loads with the relational side, but that's because it's doing a half-arsed job of mapping these to objects.
But then you have actual properties on your todo list. So even in your object model you already have two classes, and your todo list has a name and a list of items.
So there's not one class, there's two classes already.
As to "having a list", Django gives you reverse relations so you can do `my_list.items.all()`. Beyond the fact that your persistence layer being a database meaning that you need to do _something_, you're really not far off.
One could complain that `my_list.save()` doesn't magically know to save all of your items in your one-to-many. But I think your complaint is less about the relational model and much more about the "data persistence" question. And Django gives you plenty of tools to choose how to resolve the data persistence question very easily (including overriding `save` to save some list of objects you have on your main object! It's just a for loop!)
You can only do `my_list.items.all()` if you've already saved the related records in the db. And if you do something like `my_list.items.filter(...)` well that's another db query. A proper ORM should be able to map relationships to objects, not these thinly veiled db records. See how SQLAlchemy does it to see what I mean. In SQLAlchemy you can fully construct objects with multiple layers of composition and it will only map this to the db when you need it to. That means you can test your models without any kind of db interaction. It's the whole point of using an ORM really.
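The "construct a fully composed object graph in memory, persist later" idea can be shown with plain Python and no mapper at all; the classes below are invented for illustration, not SQLAlchemy's API. This is the shape of model the comment above wants: business rules are testable with zero DB interaction.

```python
from dataclasses import dataclass, field

@dataclass
class Item:
    body: str
    done: bool = False

@dataclass
class TodoList:
    name: str
    items: list[Item] = field(default_factory=list)

    def remaining(self):
        """A business rule that needs no database to be tested."""
        return [i for i in self.items if not i.done]

# The list and its items exist as one composed object graph,
# with nothing saved anywhere.
todo = TodoList("chores", [Item("dishes", done=True), Item("laundry")])
left = [i.body for i in todo.remaining()]
```

In a SQLAlchemy-style setup the mapper layers persistence on top of classes like these; in Django the reverse-relation managers mean the "list of items" only exists once rows are in the DB, which is the complaint here.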
Making that 99% smaller, simpler and automatically mapping to common types makes development a lot easier/faster. This applies to pretty much any higher level language. It's why you can write in C, but embed an ASM fragment for that one very specific thing instead of going 100% with either one.
I also have a relatively successful saas that uses Prisma and it’s been phenomenal. Queries are more than fast enough for my use case and it allows me to just focus on writing more difficult business logic than dealing with complex joins