cf + sql server (version agnostic): i'm wondering...
# cfml-general
w
cf + sql server (version agnostic): i'm wondering about doing multiple queries that all rely on a 'session-specific' flag, that is a sql server session-specific flag (setting identity_insert on/off which is only valid for a single sql server session). if i have multiple, separate cfqueries on a test page and each just does
SELECT @@spid AS theSpid
i get the same value from those multiple queries, but i'm concerned that that's just a reflection of connection pooling, rather than some kind of symmetry between a single cf request and single sql server session. does anyone know if that's true? if i were to wrap all the queries in a cftransaction would that truly 'bind' them all together into a single 'session' from sql server's standpoint? it just feels like there's nothing preventing cf from opening another connection (and therefore a new @@spid) if it needs to during the lifecycle of the request. i don't want to presume that all cfqueries in a given request actually belong to the same sql session if it's even a little bit inconsistent
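The test described above can be sketched like this (a minimal sketch; the datasource name `myDsn` is illustrative):

```cfml
<!--- two separate queries on the same test page; getting identical spids
      here may just mean the pool handed back the same connection, not
      that cf guarantees a single sql server session per request --->
<cfquery name="q1" datasource="myDsn">
    SELECT @@spid AS theSpid
</cfquery>
<cfquery name="q2" datasource="myDsn">
    SELECT @@spid AS theSpid
</cfquery>
<cfoutput>#q1.theSpid# / #q2.theSpid#</cfoutput>
```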
r
Can all of the lumps of SQL that rely on that single value of @@spid be part of the same CFQUERY or queryExecute()? That should ensure they get treated more atomically?
w
theoretically yes, but i'm changing from an INSERT..SELECT batch approach to inserting one by one because i need to log the copy to another table. so if i have 1000 rows i need to copy, that would mean at least a couple thousand statements in the single cfquery. i just need to log successful copy of each row into a logging table
d
Can you do that copying with a trigger, or pack the whole operation into a stored proc?
w
i just didn't want to turn a single 1000 row insert with SET IDENTITY_INSERT ON / OFF all in a single query into 1002 separate cfquery blocks
stored proc possible but it would be a massive refactor of what i've got to date
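For reference, the single-batch shape being avoided here would look something like the following (sketch only; table and column names are placeholders):

```sql
SET IDENTITY_INSERT dbo.TargetTable ON;

-- one insert into the target plus one insert into the log, per row
INSERT INTO dbo.TargetTable (id, value) VALUES (1, 'a');
INSERT INTO dbo.LogTable (copiedId) VALUES (1);
INSERT INTO dbo.TargetTable (id, value) VALUES (2, 'b');
INSERT INTO dbo.LogTable (copiedId) VALUES (2);
-- ...998 more pairs...

SET IDENTITY_INSERT dbo.TargetTable OFF;
```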
d
Logging ideally could be more a db-side thing, not app-side.
w
normally i agree
d
trigger then?
r
@Dave Merrill ninja'd me with the trigger.
"normally i agree" LOL.
w
not a trigger-appropriate situation, it's for data migration from a scheduled task
ultimately, i'm just trying to understand the correlation between a cf connection to a sql server db and how sql server itself views that connection from a 'session' standpoint
my tests i believe are too simple to put my mind at ease so far
r
yeah, i would have the same concern. it's where I would buy my dba a beverage and pick his brain.
d
dba's not going to know about cfquery etc
w
i did see this article but it doesn't really speak to sessions and i can't infer from it what i'm actually after: https://helpx.adobe.com/coldfusion/kb/database-connections-handled-coldfusion.html
d
and I agree that counting on "observed behavior" like it's "how it works" is such a trap...
r
we're lucky in that regard. one of ours used to do a bit of CF development. (I hired him and then he decided he wanted to work for a different team.) 🙂
to be clear, given your comment above, you wouldn't have to have 1002 separate cfquery blocks... they can all be inside the same cfquery.
w
that's a pretty large 'single statement' to pass in one query; it's close to what i feel is the max number of individual statements one should put inside a single executed block
1000 inserts + 1000 other inserts into a log table
i have found 2000 to be a personal limit based on my experiences
with sql server that is
r
agreed. we've bumped into it as well at various times in the past.
w
if you push 5000 you're asking for it in my experience. regardless, i really just kind of want to know definitively how sql sessions are represented across multiple connection-pooled queries
this just reinforces my concerns about assuming same-session, including the comments for the top answer: https://dba.stackexchange.com/questions/13698/what-is-the-difference-between-a-connection-and-a-session/13700#13700
if the sql session is for most intents and purposes equivalent to a 'cf db connection' and could bridge not only cfqueries in a single request but also queries executing from other cf requests, then setting identity_insert in one of those could potentially affect any other query trying to do the same thing on another table - identity_insert can only be ON for one table at a time in a session, so an attempt to set it for a second table will bomb in the same session until the first is turned off
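That one-table-at-a-time restriction is easy to demonstrate (sketch; table names are placeholders, and the error text is paraphrased from memory):

```sql
SET IDENTITY_INSERT dbo.TableA ON;
SET IDENTITY_INSERT dbo.TableB ON;
-- fails, roughly: error 8107, IDENTITY_INSERT is already ON for table
-- 'dbo.TableA'. Cannot perform SET operation for table 'dbo.TableB'.

SET IDENTITY_INSERT dbo.TableA OFF;
SET IDENTITY_INSERT dbo.TableB ON;  -- now succeeds
```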
i suppose i could just truncate the database and pretend i know nothing about it
would solve my problem
r
i've tried that. sub-optimal outcome.
w
but it dovetails nicely with my unplanned career change
🤣 1
r
true. we're happy to provide references, if needed.
👍 1
there's a part of this where i don't think i would want to rely on such a low-level thing that could be subject to all sorts of different factors: db version, driver version, configuration of everything between my code and the data, including possibly what else is going on on the servers (application and database) that i have no control over.
your "data migration from a scheduled task" would indicate this isn't a one-off thing?
w
correct. i think this is leading me to the conclusion that the target table should have any autoidentity removed from it, so it may all be irrelevant, other than trying to answer what i feel is a salient question about the correlation to a session. so, my problem may have been avoided, but my curiosity is still, for lack of a better term, aroused.
d
Logging really feels like a trigger thing to me. You may even be able to write a generic stored proc that logs any new rows from any table, assuming everything has a unique pk.
r
i'd lean toward doing something like this via stored proc and triggers. although the truncate option is probably more interesting. unfortunately, neither of those address the underlying curiosity, i recognize.
g
I am unsure about memory / CPU trade-offs... but could you create a generic (CF) logging function and call that logger from a function that does the insert of the REAL data? Possibly have that inside a transaction (one insert and one log), in a loop for as many rows as you need. In the sqlLogger sketch below, the columnNames_and_values array is an array of structs of columnName and value pairs - but of course use whatever collection type you're happiest with 🙂 Also (of course) alter the "placement" of the identity value where you need it. E.g. is it one identity per row of the 1000, or one identity per 1000? In the example below I used one per individual insert.
function sqlLogger(string sourceTable, string logTableName, array columnNames_and_values, numeric identityColumn) {
    for (myPair in columnNames_and_values) {
        do_needed_data_manipulation_here; // add identityColumn to the struct as needed
        do_other_needed_STUFF_here; // as required
        // table names can't be bind parameters, hence the interpolation
        queryExecute(
            "insert into #arguments.logTableName# (dataColumn, dataValue) values (:colName, :colValue)",
            { colName: myPair.columnName, colValue: myPair.value }
        );
    }
}

function saveMyStuff(...) {
    do_needed_data_manipulation_here; // as required
    do_other_needed_STUFF_here; // as required
    insert_My_Real_Data(); // getting identity if needed
    sqlLogger(theSourceTable, theLogTableName, arrayOfStructs, identity_from_above_line);
}

for (my_individual_save IN ALL_my_data_to_save) {
    add_a_transaction_block_here_maybe {
        massaged_individual_save = massage_as_needed(my_individual_save);
        saveMyStuff(massaged_individual_save);
    }
}
Hope that reads OK... Gavin.
w
just to bring this topic to something of a close (i still haven't found the sql session/cfquery correlation). for various reasons, i decided to remove the identity column from the target table of my data migration, and avoid the identity_insert issue entirely. however, you cannot simply remove identity once it's set on a column; you have to recreate the column or table in toto if you do it the normal way via ssms or scripting. add to this that the two target tables i would need to remove identity for have 20 million and 50 million rows respectively, so pain, lots of pain. however, i found this solution https://stackoverflow.com/a/26048363 using SWITCH TO (never heard of it before) and can confirm that after creating the schema for the target table (generated from the source table) minus the identity on the pk column and executing the SWITCH TO statement, the effect was just about instantaneous: the data was in the target table from the source table without having to copy or duplicate any data. one to bookmark for sure
👍 1
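The SWITCH TO approach from that answer looks roughly like this (sketch; table names are placeholders, and both tables must have identical schemas and filegroups aside from the IDENTITY property):

```sql
-- dbo.TargetTable: same columns/constraints as dbo.SourceTable,
-- but with the pk column defined WITHOUT identity
ALTER TABLE dbo.SourceTable SWITCH TO dbo.TargetTable;
-- metadata-only operation: the rows now belong to TargetTable
-- without any data having been copied
```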
j
To your original question: in Adobe CF a single request will be bound to a single connection from the pool. The connection does not get released to the pool between queries; it only gets released at the end of the request. In Lucee, the connection may be released to the pool between queries, which implies you may get a different connection between separate cfquery tags in one page. Once you wrap everything in a single transaction, it is a single connection in both.
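So per the above, wrapping the work in a transaction is the portable way to guarantee one session; a minimal sketch (the datasource name is illustrative):

```cfml
<cftransaction>
    <cfquery name="q1" datasource="myDsn">
        SELECT @@spid AS theSpid
    </cfquery>
    <cfquery name="q2" datasource="myDsn">
        SELECT @@spid AS theSpid
    </cfquery>
    <!--- inside the transaction both engines pin one pooled connection,
          so q1.theSpid and q2.theSpid are the same sql server session --->
</cftransaction>
```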
And to solve your problem with logging to a different table: you can grab the result from an insert statement using the OUTPUT clause. See https://www.c-sharpcorner.com/UploadFile/b1df45/output-clause-in-sql-server/ for examples.
I don't have SQL Server at hand, but it might be possible to chain that to something like:
INSERT INTO tableLog (id, value)
SELECT id, value
FROM (
  INSERT INTO tableDest (value)
  OUTPUT inserted.id, inserted.value
  SELECT value
  FROM tableSrc
  WHERE createCopy = 1
) AS copied;
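If the composable-DML form above proves too restrictive (the outer insert target has its own limitations), plain OUTPUT ... INTO does the same single-statement logging (sketch, same placeholder names):

```sql
INSERT INTO tableDest (value)
OUTPUT inserted.id, inserted.value INTO tableLog (id, value)
SELECT value
FROM tableSrc
WHERE createCopy = 1;
```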
w
ultimately my original issue was less about logging concepts and more specifically about setting identity insert on/off repeatedly in a batch, vs setting it once - looping a bunch of single row statements - then setting back off, and how since identity_insert is sql server session-specific what the relationship is between cfquery/connection pooling and how sql server sees (or doesn't see) multiple queries to be within the same 'session'