# orm-help
a
https://github.com/prisma/lift/blob/master/src/types.ts#L20 Are the only primitive types really just string, int, float, boolean? Seems like it's missing bytea/blob, Date, Decimal, among others
n
Right now yes (that's why it's still in preview and certainly not production-ready) but we're currently working hard towards making all types of the underlying DB available in the Prisma schema as well.
That's the spec that we're currently working on... would love to hear your thoughts on it! 🙂
a
I'll look at it. But before I do, should it really be called `float`? I think all 4 databases support double. And SQLite and Mongo only have double, I think. Or is that left over legacy stuff? =x /nitpick
n
I wouldn't be surprised if it was legacy but can't speak to it myself since I wasn't involved in the spec/implementation 🙈 would be awesome if you could share these thoughts (even the nitpicks) on GitHub as well!
g
@nikolasburk Are there more thorough docs somewhere in ref to extending those types? I’ve been bashing my head for the last 1-2 days trying to get `timestamptz` converted to a `dateTime` scalar type. Apologies for hijacking the thread, just seemed very relevant.
a
My overall impression so far is that you all are working on solving a really hard problem and I'm amazed at what you all have so far. That spec for loosening guarantees kind of scares me a little. So does the best-effort thing. But it may be possible for it to work out fine. Like, if they use unsigned bigint with MySQL and PostgreSQL, they're gonna run into problems because PostgreSQL doesn't have unsigned bigint. But it'll be fine if they don't accidentally go over the signed bigint limit. Or like with decimal. It's fine as long as they stay within MySQL's constraints (max 65 for precision, 30 for scale) and PostgreSQL's constraints (min 1 for precision).
Then, you could have some arbitrary type `T` only supported by DBs `A, B` and not `C, D`. And that's dangerous. But then, the user only uses DBs `A, B` anyway. So, even though it's clunky and possibly unsafe in the general case, for specific cases, it's fine.
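To make the decimal example concrete, here's a minimal TypeScript sketch of that kind of cross-dialect constraint check. The limits are the ones quoted above (MySQL: precision ≤ 65, scale ≤ 30; PostgreSQL: precision ≥ 1); every name in the sketch is hypothetical, not part of Prisma or any existing library.

```ts
// Hypothetical sketch: validating a DECIMAL(precision, scale) declaration
// against the per-dialect constraints mentioned above.
type Dialect = "mysql" | "postgresql";

interface DecimalType {
  precision: number;
  scale: number;
}

// MySQL allows at most precision 65 and scale 30;
// PostgreSQL requires precision >= 1 (its upper bound is omitted here).
const decimalLimits: Record<
  Dialect,
  { minPrecision: number; maxPrecision?: number; maxScale?: number }
> = {
  mysql: { minPrecision: 1, maxPrecision: 65, maxScale: 30 },
  postgresql: { minPrecision: 1 },
};

function isDecimalSupported(type: DecimalType, dialect: Dialect): boolean {
  const limits = decimalLimits[dialect];
  if (type.precision < limits.minPrecision) return false;
  if (limits.maxPrecision !== undefined && type.precision > limits.maxPrecision) return false;
  if (limits.maxScale !== undefined && type.scale > limits.maxScale) return false;
  return true;
}

// DECIMAL(70, 10) is fine for PostgreSQL but out of range for MySQL.
console.log(isDecimalSupported({ precision: 70, scale: 10 }, "mysql"));      // false
console.log(isDecimalSupported({ precision: 70, scale: 10 }, "postgresql")); // true
```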
Photon seems to be trying to unify a lot of databases, too. So, it's inevitable that you run into tradeoffs regarding guarantees, I suppose. My library is less ambitious, and so it can give stronger guarantees for a wider range of data types. And for data types that can't be unified, those are delegated to "adapter libraries" that implement all the DB-specific clauses, data types, behaviours, etc. The "unified library" is really just for people who need to guarantee the same behavior for all queries that they write.
@Globolobo no worries, I'm curious about it now that you've brought it up, too. My initial gut feeling is that it may not be possible yet. For my library, I had to solve that problem, too, since the premise is that adapter libraries will want to add their own custom data types: unsigned bigint for MySQL, interval for PostgreSQL, etc. For my library, the adapters just need to implement the relevant data mappers, define the built-in functions that work on the custom data types, and have the custom data type extend some interfaces that are really just phantom types. (So, no actual run-time properties/methods need to be implemented.)
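A rough TypeScript sketch of what that phantom-type approach could look like; all of the interface and function names below are made up for illustration and are not from tsql or Prisma.

```ts
// Hypothetical sketch of the phantom-type idea described above.
// The interfaces carry no runtime members; they exist only so the type
// system knows which operations a custom data type may participate in.
interface ComparableType { readonly __comparable?: never }
interface OrderableType { readonly __orderable?: never }

// An adapter-specific custom type, e.g. MySQL's unsigned BIGINT.
// Extending the phantom interfaces requires no runtime implementation.
interface MySqlBigIntUnsigned extends ComparableType, OrderableType {
  readonly __nativeType: "BIGINT UNSIGNED";
}

// The adapter also supplies the data mapper for the custom type, e.g. how
// to read the driver's raw value (often a string for 64-bit integers)
// and how to write it back out.
const bigIntUnsignedMapper = {
  fromDriver: (raw: string): bigint => BigInt(raw),
  toDriver: (value: bigint): string => value.toString(),
};
```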
m
hey @anyhowstep is your library public? 🙂
> Like, if they use unsigned bigint with MySQL and postgresql, they’re gonna run into problems because postgresql doesn’t have unsigned bigint. But it’ll be fine if they don’t accidentally go over the signed bigint limit.
> Or like with decimal. It’s fine as long as they stay within MySQL’s constraints (max 65 for precision, 30 for scale) and Postgresql’s constraints (min 1 for precision)
this is exactly the problem we’re trying to resolve. we could gloss over the details and say an int is perhaps an int64, but maybe some datasources don’t support it. so we need to get more granular with our data types. for the common SQL flavors, you’ll have a pretty consistent set of “Standard Types”, so String, Int, Float, etc, that will map to `text`, `integer`, `float4`, etc.
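As a concrete illustration of that mapping idea, a minimal TypeScript sketch for a PostgreSQL datasource; the table below is illustrative only and is not Prisma's actual internal mapping.

```ts
// Hypothetical sketch: mapping "Standard Types" to PostgreSQL native types.
type StandardType = "String" | "Int" | "Float" | "Boolean";

const postgresNativeType: Record<StandardType, string> = {
  String: "text",
  Int: "integer",
  Float: "float4",
  Boolean: "boolean",
};

console.log(postgresNativeType["Float"]); // "float4"
```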
> That spec for loosening guarantees kind of scares me a little. So does the best effort thing.
what part scares you? 🙂 maybe the spec wasn’t clear, because from my perspective your examples motivate making the guarantees looser 😅
by looser guarantees, we’re saying that not every datasource implements… say a Float. Currently we’re saying that we’ll figure out a way to store a Float if it’s a core data type, even if the datasource doesn’t support it.
a
@Matt Mueller (Prisma Client PM)
> is your library public?
https://github.com/AnyhowStep/tsql Still needs a lot of work (naming, features, adapter libraries, documentation, etc.), but a bunch of stuff already works.
1. Clone
2. `npm install`
3. `npm run sanity-check` (Should build and run tests)
> we’re saying that not every datasource implements... say a Float.
Ah. Okay, that makes more sense. So, if they try to use an unsupported data type for a data source, throw a compile-time or run-time error (hopefully both)?
That compile-time error seems like the most important bit.
m
cool project! i like the code-first technique you’re using
yah, it would be compile-time
we’re “compiling” our schema file and we know the type mapping based on the datasources you’re including
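A small TypeScript sketch of what that kind of compile-time check might look like: walk the schema, look up each field's type in the mapping table for every included datasource, and report anything that can't be mapped. The datasource names and mapping tables here are illustrative assumptions, not Prisma's actual implementation.

```ts
// Hypothetical sketch of a schema "compile" step that validates type
// mappings against the datasources included in the schema.
type Datasource = "postgresql" | "mysql" | "sqlite";

interface Field { name: string; type: string }

// Illustrative (and deliberately incomplete) mapping tables per datasource.
const typeMappings: Record<Datasource, Record<string, string | undefined>> = {
  postgresql: { String: "text", Int: "integer", Float: "float4", Decimal: "numeric" },
  mysql: { String: "text", Int: "int", Float: "float", Decimal: "decimal" },
  sqlite: { String: "text", Int: "integer", Float: "real" }, // no fixed-point decimal here
};

function checkSchema(fields: Field[], datasources: Datasource[]): string[] {
  const errors: string[] = [];
  for (const ds of datasources) {
    for (const field of fields) {
      if (typeMappings[ds][field.type] === undefined) {
        errors.push(`Field "${field.name}": type ${field.type} is not supported by ${ds}`);
      }
    }
  }
  return errors;
}

// A Decimal field passes for PostgreSQL but is rejected when SQLite is included.
console.log(checkSchema([{ name: "price", type: "Decimal" }], ["postgresql", "sqlite"]));
```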
a
I like the idea of loosening the guarantees, then. It does mean that the compiler will get very large, with all the rules and whatnot, as support for more data sources gets added. I think the growing size is fine, since it's just for compiling, and not part of the run-time.
Part of the reason I don't like other "unifying" libraries is that they include code for all their data sources at run-time. For example, knex generates SQL strings for all its supported dialects in its library, even if you only care about one dialect.
m
@anyhowstep 100% agreed
the compiler will definitely grow in size
a
How simple/complex are the "expressions" supported by Photon.js? I don't think I saw a way to use expressions in the "select" clause in the documentation. For example, `SELECT user.someValue + 50` in SQL. However, I did see expressions in the "where" clause.
```
where: {
  OR: [
    { title: { contains: 'GraphQL' } },
    { title: { contains: 'REST' } }
  ],
  published: true,
},
```
So, I'm curious how simple/complex these expressions can be, and how deeply nested they can get. Are correlated subqueries a thing? Are type-casting expressions a thing? Are joins a thing? Can expressions reference properties/columns from multiple tables/entities? etc.
m
good question! right now they’re somewhat simple. there are joins and a couple other things, but nothing too crazy.
in the future, we want to support any postgres expression
for example:
```ts
(u, x) => x.coalesce(u.firstName, u.lastName)
```
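To show how such a callback could be typed, here is a self-contained TypeScript sketch; the types are purely hypothetical and are not an existing Photon.js API.

```ts
// Purely hypothetical sketch of the callback shape above; not a real API.
interface UserColumns { firstName: string | null; lastName: string | null }
interface ExpressionBuilder {
  // COALESCE: returns the first non-null argument.
  coalesce(...values: Array<string | null>): string;
}
type SelectExpression = (u: UserColumns, x: ExpressionBuilder) => string;

const displayName: SelectExpression = (u, x) => x.coalesce(u.firstName, u.lastName);
```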
a
Will there be "loose" guarantees for functions/expressions, too? For example, I believe
STRCMP()
is MySQL-specific. PostgreSQL does not have it.
SIMILAR_TO
is PostgreSQL-specific. MySQL does not have it. Regexes are handled differently across MongoDB, MySQL, PostgreSQL.
m
yep, it’ll be like turning on & off parts of the API based on the capabilities of the database, so turn on `strcmp` for mysql, turn it off for postgres.
we’re about a year or so out from that though
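One way to picture that capability-gating at the type level, as a hedged TypeScript sketch; the capability flags and function names are invented for illustration and are not Prisma's design.

```ts
// Hypothetical sketch: expose dialect-specific functions only when the
// dialect's capability flags say the database supports them.
interface MySqlCapabilities { strcmp: true; similarTo: false }
interface PostgresCapabilities { strcmp: false; similarTo: true }

type StringFunctions<C> =
  (C extends { strcmp: true } ? { strcmp(a: string, b: string): number } : {}) &
  (C extends { similarTo: true } ? { similarTo(value: string, pattern: string): boolean } : {});

declare const mysqlFns: StringFunctions<MySqlCapabilities>;
declare const postgresFns: StringFunctions<PostgresCapabilities>;

mysqlFns.strcmp("a", "b");            // OK: STRCMP() is part of the MySQL API
postgresFns.similarTo("abc", "a%");   // OK: SIMILAR TO exists on PostgreSQL
// postgresFns.strcmp("a", "b");      // compile-time error: turned off for Postgres
```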
a
What about transactions and isolation levels? Some data sources have transactions, some don't. The ones that do have them don't generally agree on the implementation details. I think Mongo 4.0's multi-collection transactions only work on replica sets.
So, stuff like "nested writes" can have partial success on some data sources.
And I'm sure that would be undesirable if users expect these "nested writes"/"insert many" to be atomic operations.
Can users start, commit, and rollback transactions explicitly? Or is it planned? Will it be like the plan for data types, and expressions? Different API for different data sources, to accommodate the different implementation details?
If so, it's starting to look like my project (which isn't as complete as Prisma's, so maybe I should be saying that my project is starting to look like Prisma's...). Except it seems like Prisma generates the API based on `(schema, data source)` pairs, and mine requires `npm install`ing a separate "adapter" library for each data source a user wants. So, lots of overlap. My project will probably stay SQL-focused and not be as ambitious as Prisma's, though.
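For reference, a self-contained TypeScript sketch of the kind of explicit transaction API being asked about (begin / commit / rollback with an isolation level); this is an assumption about what such an API could look like, not Prisma's actual or planned interface.

```ts
// Hypothetical sketch of an explicit transaction API.
interface Transaction {
  commit(): Promise<void>;
  rollback(): Promise<void>;
}

interface Connection {
  beginTransaction(
    isolationLevel?: "READ COMMITTED" | "REPEATABLE READ" | "SERIALIZABLE"
  ): Promise<Transaction>;
}

// Usage: make a "nested write" explicitly atomic by wrapping it in a transaction.
async function createUserWithPosts(conn: Connection): Promise<void> {
  const tx = await conn.beginTransaction("SERIALIZABLE");
  try {
    // ... insert the user, then insert posts referencing the new user ...
    await tx.commit();
  } catch (e) {
    await tx.rollback(); // no partial success on failure
    throw e;
  }
}
```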