# cfml-general
d
Is anyone using AI to write cf code, review it, etc? Any experience with which AIs are better or worse for cf? I've had mixed experience with claude.ai, only a tiny bit of testing. Partly surprisingly good, partly pretty hard to steer.
d
copilot in vscode is working out pretty good. the autocompletion of comments, functions, missing params is pretty good.
πŸ‘ 1
r
I don't use it to write large blocks of code, but copilot in VSCode can be helpful.
d
I keep referring back to chatgpt.com (free version) to have conversations.
r
Also comes in handy for indenting code and sorting structs, etc.
πŸ‘πŸΎ 1
☝🏻 1
d
or for generation of test/mock data
πŸ‘ 1
Dave, a couple times in the last week it has really surprised me when it suggests code without a comment. left me thinking "how did it know I want that next?".
πŸ˜‚ 1
Mostly, I already know what I want to code, and copilot is just helping me complete it faster.
If you are wondering whether it can generate a complete, working app that uses the Coldbox framework with a User class and UserService, well, it's almost there. I've tried it, and it gave me 95% workable code, with good explanations of what each piece does.
It's just not good enough for someone who doesn't know Coldbox
r
just remember that all code written by "AI" (aka an LLM) is owned in part by the AI, and any code you submit to an AI is now owned by the AI. Not to sound overly paranoid, but be careful what you give those systems access to, or your whole code base could be exposed and you would not know it till someone asks the AI: show me the source code for this website.....
☝️ 1
☝🏻 1
b
I'm using Copilot in VSCode, but it's always pretty hit and miss
I get better mileage out of it for Java than CF.
Not only is it wrong about 50% of the time, but it's always so sure it's right you have to be extra cautious of its answers
βœ… 1
r
Still hallucinates a bit
b
It does better at predicting my next line of code, or the rest of my current line, with its auto-complete than it does answering actual questions about CF
☝🏾 1
b
One metaphor is to consider it a "junior developer"... at least right now. It probably knows Java better because Java code is more available.
b
Yeah, "junior" for sure. It can follow simple instructions, but will get "confused" and off in the weeds pretty quick with complexity that requires it to "think" at all
πŸ’― 2
b
I also use the VSCode Copilot integration, and agree about it being hit and miss. I’ve definitely had it generate code that at a glance looked great only to discover that it made a bunch of stuff up. I find it’s most useful for autocompleting more basic things and to set up structured data. I have noticed that it works a lot better with other more widely-used scripting languages, I assume because it has a bigger code pool to pull from.
g
I agree with @bendur. I tried PHP and CF, and it generated better code for PHP than for CF
πŸ’― 1
d
Just moments ago: in each file I typed what is underlined in neon green, and copilot autocompleted the lines of code enclosed in the purple curly brackets all at once. It's doing it more and more. It's aware of my open files, maybe even the project code base.
Only had one typo: table column `year` should have been `planyear`
Junior-level stuff, but it still helps
a
I'm using Amazon Q as we have strict policies about exposing code to 3rd party systems. It's quite good at some things (and does hallucinate a lot!) but improving all the time. The other thing about these AIs is that you are training the AI each time you use it (particularly with the accept / decline) so you are training a co-worker who will replace you πŸ™‚ Avoiding using an AI is head-in-the-sand stuff so use them, but I find it interesting how we are so willing to train things to replace us!
πŸ˜‚ 1
c
> Avoiding using an AI is head-in-the-sand stuff so use them, but I find it interesting how we are so willing to train things to replace us! The other thing that doesn't seem to bother people is the amount of additional energy they require currently. To me that's a reason to avoid them, or at least use them more sparingly/mindfully.
🌳 2
g
Usually I use it for JavaScript help if I need it. I don't ask ColdFusion-based questions much. Even when I have to, it usually gives me some conversion functions, because they're all available on cflib, and it's simpler to have it fetch them than to go searching myself
b
> you are training the AI each time you use it
That's actually been one of the really frustrating things for me. With ChatGPT and Copilot at least, you specifically CANNOT teach it anything, and it will tell you that. If I ask it a question about ColdFusion and it gives me a totally wrong answer and I tell it the correct answer, I can ask it if it will remember that correct answer the next time someone asks and update its data model, and it will straight up say that no, the information we discuss cannot in any way be stored or put back into the system for future use by myself or any other user. It literally will not self-correct. It's not even capable of self-reporting bugs or issues to its developers if I ask it to. And what really pisses me off about it is I'll get onto it for saying something stupidly wrong (insulting AI is a pastime of mine) and it will apologize and say something trite like "I'll remember to do this better in the future", and when I press it on the issue and say, "but you really WON'T remember and you WON'T do better in the future since your model isn't allowed to use this information and update itself", it will totally admit that I'm right and it won't remember and it won't do better. I just want to slap it sometimes, lol
πŸ’― 2
πŸ˜‚ 1
I get there are obvious issues, like how "Google bombing" worked, where bad actors could feed in incorrect information for malicious purposes, but there has to be some way for it to at least confirm whether the information I'm telling it is actually correct, and in those cases correct itself.
The thumbs-up/thumbs-down ability is IMO more meta-training: it captures whether the format of the answer was useful, not really whether the information itself was actually correct.
d
yeah, in my example it's definitely not learning how I code. It just feels like it's expanding its context awareness, because it will suggest code completion that is relevant to my task even when those files are not open, although they were open maybe minutes before. And/or it's expanding its context awareness to the entire code base.
r
some LLMs have the thumbs up/down option, but they are "research" LLMs and not really for "production" use (so they say). I saw an article that China's LLMs are 50x better in some cases, so maybe theirs are better at writing code.... but I would not put my actual code in any of them, since I don't trust that they would not steal the idea, you know, based on their history with tech and producing "knock-offs"
d
I've been able to tell claude what's wrong about its answer, it apologized and fixed it, but it won't remember for next time, or if someone else asks similar questions.
context windows are getting bigger over time I understand, but not quickly
g
From all the above, it seems we can't teach AI. To me it's an advanced, well-designed algorithm that takes information from different places and uses its model to combine it into an answer that is neither fully correct nor entirely wrong. It does some analysis and shows some good results, but all it is really doing is googling
πŸ’― 1
b
It is better at Googling than Google, but it IS doing more than just Googling. Google cannot write or explain code.
πŸ’― 1
Now that said-- I use AI more often for a more targeted version of Google, than I do for actually "thinking" for me. I can typically find hard-to-find information faster via ChatGPT since it understands the nuances of my question better than Google's clunky keyword matching. And that is where a lot of the value lies for me.
πŸ‘ 2
πŸ‘πŸΎ 1
j
I've had fun this week trying to get some ACF code to work on Lucee. It keeps offering a suggestion, I try it and then tell ChatGPT "it doesn't work" and it's like "Oh yeah - you should really do this". I'm like why didn't you suggest that in the first place! But then the next suggestion doesn't work and we just go round and round til it's outputting something that would never work LOL
b
yeah, I call that the "death spiral", when it starts to get so confused that it gets stuck in a loop suggesting the same broken code over and over, or just toggling back and forth between two examples that don't work. It's infuriating, as it seems to just turn into a slobbering idiot who can't remember what it JUST suggested to you. It will say, "I apologize, the previous code will not work, but here is some code that will work" and then proceed to give you the EXACT SAME CODE LINE FOR LINE that it just gave you. Or, my favorite: it will just start removing parts of the code, for no reason, that you had it add in earlier. It does this for Java too, and I think it's just a limitation of the AI in general. When I'm trying to get it to generate code and it starts to death spiral, you just have to cut your losses. It will never even get back to a previous semi-working example, it will only go downhill, lol
d
Try claude.ai, just out of curiosity, let us know how that goes.
j
Yes! At this point I start laughing and embrace my job security (for now) πŸ™‚ @Dave Merrill It would be interesting to open several of them and feed them the same questions, I'll try that next time!
g
I just tried something on ChatGPT and it gave me repeated code. Later I used some harsh words; it ignored those and gave me the same code again, and it also gave me a function which does not exist in ACF
So it definitely picks from various sources: someone created a UDF, it picks that up and shows it as if it were built-in code, but that will never work
a
I didn't say (or didn't mean to say!) that you could teach it new things (that would come with its own world of trouble!), but some of the LLMs will track what you accept and also the way you ask questions. That's what I meant by us training them. I did try and teach ChatGPT about BoxLang πŸ™‚
When I write React code it's very useful, when I write CFML it is not very useful.
As for energy consumption - I think that is a valid point. I see this more as a failure of humans rather than AI, in that environmental impact is not given much consideration as they are in an arms race. Deepseek has already shown that it's possible to use fewer resources (an AI making an AI redundant?!). Energy infrastructure needs a massive overhaul worldwide - there are some good solutions, but they all require co-operation and investment (internationally and domestically), and we seem to be increasingly in a "short-term" mindset, partly due to elections typically being in 4-5 year cycles, so politicians only look 4 years ahead rather than the 10-20 years that this needs. *puts away soapbox before going too far off topic!*
βž• 1
c
@aliaspooryorik Spot on, John. The argument is that AI will solve its own energy/water issues, but wouldn't it have been better to sort that out first before rolling it out to the whole world?
b
Our ColdFusion slack has just gone full circle to Cold Fusion energy. Good work all! βš›οΈ
🀘 1
j
"Cold fusion" engineers are probably like - why the heck is this AI constantly suggesting I cfdump?? πŸ˜†
🀣 2
🀷 1
b
I just got a mental image of Homer Simpson at his job at the nuclear plant using A.I. to manage the reactor 😨
πŸ’₯ 2
g
AI will never tell you to do a cfdump; it will do a JavaScript console.log to dump CF data. How dumb is that?
b
Garbage-in-garbage-out (GIGO) didn't begin with Artificial Intelligence(AI). GIGO is as old as the field of computing itself. I should now mention what is perhaps the single most important word when using AI: prompting. In all probability, the unsatisfactory result that AI has been giving some of you is down to insufficient or improper prompting. In short, the quality of AI-generated responses depends heavily on how well you craft the prompts.
πŸ€” 1
πŸ’― 1
d
@BK BK Are you saying you get awesome results working with cf code? Care to share any really productive prompts?
b
LOL, hey Dave, I'm with you on that one! Yes, please show us how it's done BK, it would be appreciated.
m
I get what I'd consider decent results with it writing CF, using the Claude API and the Cline extension in VSCode, using plan and act modes. I spend almost all my time in plan. https://docs.cline.bot/exploring-clines-tools/plan-and-act-modes-a-guide-to-effective-ai-development My custom instructions mostly describe the CF, JS, and CSS versions and libraries I'm using.
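For reference, a hypothetical sketch of what such custom instructions might contain (the engine, versions, and library choices below are invented for the example):

```text
- CFML engine: Lucee 5.4. Prefer cfscript syntax over tags unless asked otherwise.
- JavaScript: vanilla ES2020, no jQuery. CSS: plain CSS, no preprocessor.
- Use queryExecute() with parameterized values; never interpolate user input into SQL.
- Match the code style of the files already open in the workspace.
```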
b
> @Dave Merrill: Are you saying you get awesome results working with cf code?
No. I am saying something fundamental and general about working with AI, irrespective of the programming language or even of the professional area. You get much better results when you use prompts than when you don't.
> @Bill Nourse: please show us how it's done BK, it would be appreciated.
Sure, I can give you an example of a prompt for CF code. I of course need to know what you want the code to do.
d
Can you show a real-world success example of a task you needed to do, and the prompt you used? If it's reasonably compact and ok to show publicly, you could show the result too, but I get that that might not be possible.
a
I've used things like "For the ColdFusion component file currently open in my editor create test cases for the public methods using Testbox in a BDD style"
πŸ€” 1
Handy with really old code where the original developer didn't write any tests but you now need to change it
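For anyone who hasn't used TestBox's BDD style, a prompt like that would be expected to produce a spec roughly along these lines (the `models.UserService` component and its `getUser()` method here are hypothetical):

```cfml
// Sketch of a TestBox BDD spec, assuming a models.UserService with getUser()
component extends="testbox.system.BaseSpec" {

    function run() {
        describe( "UserService", function() {

            beforeEach( function() {
                variables.service = new models.UserService();
            } );

            it( "returns a user struct for a valid id", function() {
                var user = service.getUser( 1 );
                expect( user ).toBeStruct();
                expect( user ).toHaveKey( "email" );
            } );

            it( "throws for an unknown id", function() {
                expect( function() {
                    service.getUser( -1 );
                } ).toThrow();
            } );
        } );
    }
}
```

You still have to review what it generates, since the assertions it invents may not match what the legacy methods actually return.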
b
> @Dave Merrill: Can you show a real-world success example of a task you needed to do, and the prompt you used? etc.
Let me repeat my offer. Can you give me an example of a CFML task you need to do? Then I, or someone else, could suggest a prompt for you to use.
d
I saw and appreciated your offer. I was trying to short-circuit the effort both of us needed to make, by asking about some task you already did.
m
@Dave Merrill I would say the simplest/most intuitive way I've found so far is to use the Cline extension in VSCode. With it, instead of trying to construct your prompt, just carry on a requirements-gathering conversation in Cline's plan mode, something like what I would do if I was asking another person to build something. When it seems like it understands what I'm looking for, I switch over to act and it sends what we've decided on. It works kinda like plan mode is using AI to build the full prompt to pass to act. So far, I'd say it goes way faster, uses fewer tokens, and results in something closer to my expectations than I used to get trying to prompt-engineer.
d
@Matt Jones Noted, thanks. I'm currently using IntelliJ IDEA, not vscode, but I think everyone else on my team is using vscode, so they could try this. I'll pass along the recommendation. We also have some restrictions on AI, will need to investigate whether that extension would be allowed.
m
@Dave Merrill I use it with claude, but it supports a pretty big list of ai providers. here is an article that you might find useful https://addyo.substack.com/p/why-i-use-cline-for-ai-engineering
πŸ‘ 1