hyeonji jeong
02/13/2025, 3:47 AM
@ImportAutoConfiguration(
value = [ExposedAutoConfiguration::class],
exclude = [DataSourceTransactionManagerAutoConfiguration::class]
)
However, if we add the spring-boot-starter dependency via Gradle using implementation, ExposedAutoConfiguration is applied automatically. In that case, is it still necessary to explicitly add @ImportAutoConfiguration(ExposedAutoConfiguration::class), or is it optional?
Could someone clarify the best practice for this?
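For context, the dependency in question would be declared roughly like this (Gradle Kotlin DSL sketch; the version is a placeholder, check the current release):

```kotlin
// build.gradle.kts (fragment): pulls in ExposedAutoConfiguration via the starter.
dependencies {
    implementation("org.jetbrains.exposed:exposed-spring-boot-starter:<version>")
}
```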
Thanks in advance!
Nagaraj Hegde
02/14/2025, 4:34 PM
lauraldo
02/14/2025, 5:43 PM
Nagaraj Hegde
02/18/2025, 2:26 AM
Andrey V
02/19/2025, 7:20 AM
object Objects : UUIDTable("objects", "id") {
    val data = json<...>("data")
    val prev = uuid("prev").references(id)
}
What's the DSL syntax for CTEs to do things like linked-list building, given the SQL form
WITH objs AS (
    SELECT data, id, prev FROM Objects WHERE id = ${ID}
    UNION
    SELECT o.data, o.id, o.prev FROM Objects o JOIN objs ON (o.id = objs.prev)
)
SELECT objs.data, objs.id, objs.prev FROM objs;
where ${ID} is the usual Query.where { id eq ... }
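The Exposed DSL (at least in the versions discussed here) doesn't appear to offer a CTE builder, so one common fallback is raw SQL via `transaction { exec(...) }`. A minimal sketch of just the SQL assembly, assuming the `objects` table and columns from the snippet above (note that PostgreSQL requires the RECURSIVE keyword for a self-referencing CTE; parameter binding is left as a `?` placeholder):

```kotlin
// Sketch only: builds the recursive-CTE SQL as a string, to be executed through
// Exposed's raw `exec`. Table/column names are taken from the Objects definition.
fun linkedListCteSql(idPlaceholder: String = "?"): String = """
    WITH RECURSIVE objs AS (
        SELECT data, id, prev FROM objects WHERE id = $idPlaceholder
        UNION
        SELECT o.data, o.id, o.prev FROM objects o JOIN objs ON (o.id = objs.prev)
    )
    SELECT objs.data, objs.id, objs.prev FROM objs
""".trimIndent()
```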
(Also, is there a way to create virtual tables for pre-join filtering? I can work around it with post-join filtering, but creating a virtual table helps filter early and clean the data set pre-join (and pre-recurse).)
Eve Kolb
02/19/2025, 9:43 AM
Endre Deak
02/21/2025, 8:08 PM
Does fetchBatchedResults load the full Iterable<Iterable<ResultRow>> into memory? How should one deal with a large result set? Any suggestions?
Liz Ainslie
02/25/2025, 12:04 AM
UuidTable, UuidEntity, etc. that support the experimental kotlin.uuid.Uuid. I needed it for a project today and went ahead with my own implementation if anyone wants to use it, but I'm still curious as to what JetBrains & contributors think about something like this in DAO.
maxmello
02/28/2025, 4:38 PM
read = host=pgcluster-replicas port=5432 dbname=postgres
write = host=pgcluster-primary port=5432 dbname=postgres
postgres = host=pgcluster-primary port=5432 dbname=postgres
• We are using a schema-per-service approach where the search path is restricted so that each individual service's database user only sees the relevant schema(s).
• To properly differentiate between read and write nodes, we use the following dataSource strings: pgcluster-pgbouncer.postgres.svc.cluster.local:5432/read
and pgcluster-pgbouncer.postgres.svc.cluster.local:5432/write
• We are connecting using the following code (we establish two connections in practice, one for read, one for write)
Database.connect(url = "jdbc:postgresql://$dataSource", ...) // dataSource is the string above
When the name of the connection is NOT EQUAL to the dbname (so “read” and “write” vs “postgres”), Exposed's SchemaUtils breaks. The behavior is as follows:
• SchemaUtils.listDatabases() DOES show the correct database name “postgres”
• SchemaUtils.listTablesInAllSchemas() and SchemaUtils.listTables() show no entries
• any Table.exists() returns false, even though it exists
• TransactionManager.current().connection.schema does detect the correct schema though
• All normal operations (querying, create statements, etc.) work with no problem. I think the problem lies somewhere in VendorDialect's getAllTableNamesCache or thereabouts, but it's very hard for me to find how/where exactly this breaks.
• If I connect using /postgres at the end of the dataSource string it works, but this means we cannot use nodes with different names (read & write).
Jocelyne Abi Haidar
03/03/2025, 2:13 PM
Don Kittle
03/04/2025, 3:37 PM
Column<*> type, though, which cannot be used in a .where as there don't seem to be extension functions like .eq defined for Column<*>. What is the proper way to do this? My column finder function is:
fun findColumn(table: Table, column: String): Column<*> =
table.columns.find {
it.name.lowercase() == column.lowercase() ||
it.name.lowercase() == column.toSnakeCase() ||
it.name.lowercase() == column.toKebabCase()
} ?: error("Could not find column $column in table ${table.tableName}")
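For reference, the toSnakeCase()/toKebabCase() helpers the finder calls are not part of Exposed or the Kotlin stdlib; a plausible camelCase-splitting implementation (my assumption, not the poster's actual code) might look like:

```kotlin
// Hypothetical helpers assumed by findColumn above; not part of Exposed or the
// Kotlin stdlib. They insert a separator at each lower-to-upper case boundary,
// then lowercase the whole identifier.
fun String.toSnakeCase(): String =
    replace(Regex("([a-z0-9])([A-Z])"), "$1_$2").lowercase()

fun String.toKebabCase(): String =
    replace(Regex("([a-z0-9])([A-Z])"), "$1-$2").lowercase()
```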
Mario Andhika
03/14/2025, 4:18 AM
andrewg
03/17/2025, 9:15 PM
Fedor Erofeev
03/18/2025, 3:01 PM
maarten ha
03/19/2025, 9:28 AM
Muhammad Fazalul Rahman
03/19/2025, 4:21 PM
ClientHttpRequestFactory while upgrading Exposed to 0.57.0 or later? The breaking change seems to have been introduced in Exposed 0.57.0. Posting the trace in 🧵
Emanuele Iannuzzi
03/20/2025, 6:24 PM
When creating a User in the Users table (either with the User.new DAO method or the Users.insert method), an enumeration field accountstatus that is described as non-null, with a default value, makes the insertion fail with "null value in column accountstatus violates not-null constraint".
Here is the code for the table definition and the user DAO:
object Users : UUIDTable() {
    val email = varchar("email", 255).uniqueIndex()
    val role = enumeration<Role>("role")
    val registrationDate =
        datetime("registrationDate").default(Clock.System.now().toLocalDateTime(TimeZone.currentSystemDefault()))
    val disabledDate = datetime("disabledDate").nullable()
    val accountStatus = enumeration<AccountStatus>("accountStatus").default(AccountStatus.ACTIVE)

    enum class Role {
        PROFESSOR, STUDENT
    }

    enum class AccountStatus {
        ACTIVE, DISABLED
    }
}
class User(id: EntityID<UUID>) : UUIDEntity(id), UserData {
    companion object : UUIDEntityClass<User>(Users)

    override var email by Users.email
    override var role by Users.role
    override val registrationDate by Users.registrationDate
    override var disabledDate by Users.disabledDate
    override var accountStatus by Users.accountStatus
    val sessions by Session referrersOn Sessions.userId
}
Here is the code that throws the exception:
// ...
transaction {
    User.new {
        email = email /* logic omitted */
        role = role /* logic omitted */
    }
}
Moreover, could someone tell me if there is a way, even if client-side and not database-enforced, to construct default values based on properties of the soon-to-be-inserted row? I've read about the clientDefault method but I can't seem to figure out how to build a default value based on the various Columns.
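Not Exposed API, but a plain-Kotlin sketch of the idea being asked about: a client-side default derived from other fields of the row being built, applied just before the insert (all names here are made up for illustration):

```kotlin
// Hypothetical row builder: derives a default for one field from another,
// entirely client-side. This only illustrates the pattern, not Exposed's
// clientDefault mechanism.
data class NewUser(val email: String, val displayName: String)

fun newUser(email: String, displayName: String? = null): NewUser =
    NewUser(
        email = email,
        // Derived default: fall back to the local part of the email address.
        displayName = displayName ?: email.substringBefore('@'),
    )
```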
Thanks in advance, and sorry for the poor English.
Andres
03/24/2025, 9:49 PM
Is there an upsert that would append a value to an existing array column? I am trying to do a bulk update.
Dominick
03/25/2025, 5:27 AM
Users.select { concat(Users.firstName, stringLiteral(" "), Users.lastName) like "%${search}%" }
but that's not valid.
Planned here https://kotlinlang.slack.com/archives/C0CG7E0A1/p1502141543702309?thread_ts=1502104984.024287&cid=C0CG7E0A1 but not sure if it was ever added.
Andromadus Naruto
03/25/2025, 4:23 PM
Helene Ballet
03/27/2025, 4:55 PM
creationDate field for instance, or a metadata field referencing a different table).
I am using a DAO approach and I was hoping to do something like:
abstract class TableWithMetadata(name: String) : LongIdTable(name) {
    val creationDate = datetime("creation_date")
}

object DataTable : TableWithMetadata("data") {
    val title = varchar("title", length = MAX_STRING_LENGTH)
}
with
abstract class DbDtoWithMetadata<T : TableWithMetadata>(id: EntityID<Long>) : LongEntity(id) {
    var creationDate by T.creationDate
}

class DataDbDto(id: EntityID<Long>) : DbDtoWithMetadata<DataTable>(id) {
    companion object : LongEntityClass<DataDbDto>(DataTable)

    var title by DataTable.title
}
and call it like this:
transaction {
    DataDbDto.new {
        creationDate = LocalDateTime.now() // Or find a more generic way, so it applies automagically to all DbDto extending DbDtoWithMetadata
        title = data.title
    }
}
but this doesn't work due to a compilation error in DbDtoWithMetadata:
Type parameter 'T' cannot have or inherit a companion object, so it cannot be on the left-hand side of a dot.
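A plain-Kotlin sketch of one possible workaround (my suggestion, not from the thread): since a type parameter cannot supply the column, pass the concrete table instance into the abstract base class and read the column off that instance. The names below mirror the snippet, but the types are simplified stand-ins, not Exposed classes:

```kotlin
// Simplified stand-ins for the Exposed types, to show only the generics pattern.
open class TableWithMetadata(val name: String) {
    val creationDate: String = "creation_date" // stands in for a Column
}

object DataTable : TableWithMetadata("data")

// The base class receives the table *instance*, so it never needs T.creationDate.
abstract class DbDtoWithMetadata(table: TableWithMetadata) {
    var creationDate: String = table.creationDate
}

class DataDbDto : DbDtoWithMetadata(DataTable)
```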
Do you have a workaround or a better way to handle this use case?
Francesco Di Sciascio
04/09/2025, 12:56 PM
val idRecordInserted = MyTable.insertAndGetId {
    it[id] = -1
    it[code] = myDataPassedInBodyOfRequest.code
    ...
}
Notice that I cannot remove the ID parameter because I will obtain an exception 'MyTableObject.id is not in record set', where 'MyTableObject' is the object of type IdTable<Long> defined to map the table as follows:
object MyTableObject : IdTable<Long>("my_table_name") {
    override val id = long("id").entityId()
    val code = varchar("code", 255)
    ...
}
Is there a way to automatically recover the ID of the inserted record, or do I need to read the sequence's current value inside an exec() invocation?
Thanks all
Oleg Babichev
04/09/2025, 5:57 PM
debop
04/14/2025, 5:14 AM
debop
04/15/2025, 2:49 AM
debop
04/15/2025, 3:30 PM
interface AuditableEntity {
    val createdAt: Instant
    val updatedAt: Instant
}
And then assign values to createdAt and updatedAt automatically via an AuditableEntityLifecycleInterceptor, just before INSERT or UPDATE.
Is this kind of approach feasible in Exposed 0.60.0?
debop
04/20/2025, 6:21 AM
debop
04/26/2025, 11:06 PM
A coroutine-friendly version of Query.fetchBatchedResults that returns Flow<List<ResultRow>>. It supports globally unique IDs (e.g. snowflakeId) that are not auto-incrementing.
package io.bluetape4k.exposed.sql
import kotlinx.coroutines.flow.Flow
import kotlinx.coroutines.flow.channelFlow
import org.jetbrains.exposed.dao.id.EntityID
import org.jetbrains.exposed.sql.Column
import org.jetbrains.exposed.sql.EntityIDColumnType
import org.jetbrains.exposed.sql.FieldSet
import org.jetbrains.exposed.sql.Op
import org.jetbrains.exposed.sql.Query
import org.jetbrains.exposed.sql.ResultRow
import org.jetbrains.exposed.sql.SortOrder
import org.jetbrains.exposed.sql.SqlExpressionBuilder.greater
import org.jetbrains.exposed.sql.SqlExpressionBuilder.less
import org.jetbrains.exposed.sql.and
/**
 * An extension function that makes [SuspendedQuery.fetchBatchResultFlow] usable from a [FieldSet] in a coroutine environment.
 *
 * To use this function, the first selected column must be of type Int or Long.
 *
 * // Read in batches of 10
 * val batchedIds: List<List<Int>> = ProductTable
 *     .fetchBatchResultFlow(10)
 *     .buffer(capacity = 2)
 *     .map { rows -> rows.map { it[ProductTable.id].value } }
 *     .toList()
 */
fun FieldSet.fetchBatchResultFlow(batch: Int = 1000, sortOrder: SortOrder = SortOrder.ASC): Flow<List<ResultRow>> =
Query(this.source, null).fetchBatchResultFlow(batch, sortOrder)
/**
 * An extension function that makes [SuspendedQuery.fetchBatchResultFlow] usable from a [Query] in a coroutine environment.
 *
 * To use this function, the first selected column must be of type Int or Long.
 *
 * // Read in batches of 10
 * val batchedIds: List<List<Int>> = ProductTable
 *     .select(ProductTable.id)
 *     .fetchBatchResultFlow(10)
 *     .buffer(capacity = 2)
 *     .map { rows -> rows.map { it[ProductTable.id].value } }
 *     .toList()
 */
fun Query.fetchBatchResultFlow(batch: Int = 1000, sortOrder: SortOrder = SortOrder.ASC): Flow<List<ResultRow>> =
SuspendedQuery(this@fetchBatchResultFlow.set, null).fetchBatchResultFlow(batch, sortOrder)
/**
 * A class that provides a coroutine-friendly extension of the [Query.fetchBatchedResults] method.
 */
open class SuspendedQuery(set: FieldSet, where: Op<Boolean>? = null): Query(set, where) {
/**
 * A coroutine-friendly extension of the [Query.fetchBatchedResults] method.
 *
 * To use this function, the first selected column must be of type Int or Long.
 *
 * // Read in batches of 10
 * val batchedIds: List<List<Int>> = ProductTable
 *     .select(ProductTable.id)
 *     .fetchBatchResultFlow(10)
 *     .buffer(capacity = 2)
 *     .map { rows -> rows.map { it[ProductTable.id].value } }
 *     .toList()
 */
fun fetchBatchResultFlow(batchSize: Int = 1000, sortOrder: SortOrder = SortOrder.ASC): Flow<List<ResultRow>> {
require(batchSize > 0) { "Batch size should be greater than 0." }
require(limit == null) { "A manual `LIMIT` clause should not be set. By default, `batchSize` will be used." }
require(orderByExpressions.isEmpty()) {
"A manual `ORDER BY` clause should not be set. By default, the auto-incrementing column will be used."
}
val comparatedColumn = try {
// Deliberately not filtering with { it.columnType.isAutoInc }, so that globally unique IDs such as snowflakeId are also supported.
set.source.columns.first()
} catch (_: NoSuchElementException) {
throw UnsupportedOperationException("Batched select only works on tables with an auto-incrementing column")
}
limit = batchSize
(orderByExpressions as MutableList).add(comparatedColumn to sortOrder)
val whereOp = where ?: Op.TRUE
val fetchInAscendingOrder =
sortOrder in listOf(SortOrder.ASC, SortOrder.ASC_NULLS_FIRST, SortOrder.ASC_NULLS_LAST)
fun toLong(autoIncVal: Any): Long = when (autoIncVal) {
is EntityID<*> -> toLong(autoIncVal.value)
is Int -> autoIncVal.toLong()
else -> autoIncVal as Long
}
return channelFlow {
var lastOffset = if (fetchInAscendingOrder) 0L else null
while (true) {
val query = this@SuspendedQuery.copy().adjustWhere {
lastOffset?.let { lastOffset ->
whereOp and if (fetchInAscendingOrder) {
when (comparatedColumn.columnType) {
is EntityIDColumnType<*> -> {
(comparatedColumn as? Column<EntityID<Long>>)?.let {
(it greater lastOffset)
} ?: (comparatedColumn as? Column<EntityID<Int>>)?.let {
(it greater lastOffset.toInt())
} ?: (comparatedColumn greater lastOffset)
}
else -> (comparatedColumn greater lastOffset)
}
} else {
when (comparatedColumn.columnType) {
is EntityIDColumnType<*> -> {
(comparatedColumn as? Column<EntityID<Long>>)?.let {
(it less lastOffset)
} ?: (comparatedColumn as? Column<EntityID<Int>>)?.let {
(it less lastOffset.toInt())
} ?: (comparatedColumn less lastOffset)
}
else -> (comparatedColumn less lastOffset)
}
}
} ?: whereOp
}
val results = query.iterator().asSequence().toList()
if (results.isNotEmpty()) {
send(results)
}
if (results.size < batchSize) break
lastOffset = toLong(results.last()[comparatedColumn]!!)
}
}
}
}
Matyáš Vítek
05/10/2025, 6:18 PM
debop
05/12/2025, 2:30 PM