2 Comments
Binaryb0i:

What's the point of the human engineer in a scenario where a code-criticizing self-conscious AI exists? Just let the AI write the code...

Hussein Lezzaik:

That's a great question! There are two things to consider.

(1) In the near term, with the models we currently have like GPT-3.5, it's possible to build something similar to what I described in this blog. There's already a demo project from the ScaleAI hackathon, with the code open sourced on GitHub. It's still unclear to me how abstract the representations these models learn really are, i.e. a model might be able to recognize its own source code, but at this moment these models can't learn and improve by themselves. That said, they're very good when prompted on what to do.

(2) The answer to "what's the point of the human engineer" is simple: products and businesses are built around human needs. Even if AI writes the best code, we would still need human "engineering managers" at the very least to oversee what the models are writing and make sure the whole software development cycle is up to the team's standards. It's like having multiple 10x software engineers within reach: how would you manage those resources? Or, to put it in Sam Altman's words: "AGI is on-demand intelligence."

Let me know if that answers your question!
