March 4, 2024

Forcing AI on developers is a bad idea that’s going to happen • The Register

Opinion There is one thing that companies do, a pathological behavior that upsets customers and makes things worse all round. It is so widespread and so long-lived that it deserves its own name, like an unpleasant medical condition. It doesn’t have one, but you’ll recognize it because it has ruined your day often enough: the unwanted new feature.

The Hacker’s Dictionary, with its roots in the 1970s, came close with “creeping featurism,” but that doesn’t quite capture how malignant the effect can be. It can disrupt your muscle memory for daily tasks, it can nag you to explore and enjoy its new goodness, and it can even force you to relearn a task you had already mastered. Bonus points if it’s difficult to disable or silence, and double points if that’s impossible.

For maximum effect, however, it should make your job impossible and force you to abandon a tool your professional life is built on. Take a bow, IDE maker JetBrains. Introducing a non-removable AI Assistant plugin into developers’ daily working lives is such a bad idea that there’s a good chance the whole class of phenomena could end up being called JetBrains Syndrome.

This has absolutely nothing to do with the quality of the AI assistance on offer, and everything to do with understanding the practicalities of being a developer. Every user of a JetBrains product is under time pressure to produce working code as part of a project. There will be time to learn about new features, but that time is not today. If you have an ounce of common sense or a microgram of experience, the time for new features is when you’re ready, probably after others have kicked the tires. That AI assistant may be fabulous, or it may be an intrusive, buggy time sink that imposes its own ideas on your work. AI assistants have a way of doing that.

It also has nothing to do with whether the plugin is inactive and does nothing until it is activated, as JetBrains says, nor does it matter that it doesn’t export code to unknown places for AI training purposes. It could be switched on in the future, or its behavior could change: neither would matter if the code simply weren’t there in the first place, and that isn’t an option. For developers who want to investigate AI responsibly, on their own schedule, that’s a red flag. But it’s nothing like as red and flashing as shipping an AI module into a development environment used in companies with strict no-AI coding policies. “Yes, there is AI in there, but trust us, it’s turned off.” Who wants to be the developer making that argument to management?
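
For teams that need more than trust, JetBrains IDEs read a disabled_plugins.txt file, one plugin ID per line, from each IDE’s configuration directory. What follows is a minimal sketch of a check a team might script, assuming the Linux default config location and assuming com.intellij.ml.llm is the AI Assistant’s plugin ID; verify both against your own IDE version before relying on it.

    # Minimal sketch: keep the JetBrains AI Assistant plugin disabled across
    # every IDE config directory on a workstation. Assumes the standard
    # disabled_plugins.txt mechanism (one plugin ID per line); the plugin ID
    # and the config path below are assumptions to verify locally.
    from pathlib import Path

    AI_PLUGIN_ID = "com.intellij.ml.llm"  # assumed ID; check Settings > Plugins
    CONFIG_ROOT = Path.home() / ".config" / "JetBrains"  # Linux default

    def disable_ai_assistant() -> None:
        for ide_dir in CONFIG_ROOT.glob("*"):
            if not ide_dir.is_dir():
                continue
            disabled = ide_dir / "disabled_plugins.txt"
            entries = set()
            if disabled.exists():
                entries = {line.strip() for line in disabled.read_text().splitlines()
                           if line.strip()}
            if AI_PLUGIN_ID not in entries:
                entries.add(AI_PLUGIN_ID)
                disabled.write_text("\n".join(sorted(entries)) + "\n")
                print(f"disabled {AI_PLUGIN_ID} in {ide_dir.name}")

    if __name__ == "__main__":
        disable_ai_assistant()

Running something like this from a provisioning script at least turns “trust us, it’s off” into a setting the team controls, though it still leaves the plugin’s code on disk, which is the article’s point.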

This is just strange. The developers at JetBrains are themselves, well, developers. They will have direct experience of the pressures of dev life that make non-optional featurism such a stinky idea: think of security policies, intellectual property risk, and code quality. It’s the same insoluble paradox by which everyone involved in making customer service such a horrible experience must themselves endure horrible customer service. Why don’t they do their bit better?

The kinder answer in the case of JetBrains is that, through a lack of consultation, knowledge, or foresight, it simply did not know that no-AI policies exist in some corporate development teams. That’s kinder than “we knew, but marketing made us do it.” So, let’s assume that AI assistance in development is not just marketing: how can companies like JetBrains, and indeed everyone who works in creating software, make no-AI policies unnecessary?

In the words of android hunter Rick Deckard, created by Philip K. Dick: “Replicants are like any other machine: they’re either a benefit or a hazard. If they’re a benefit, it’s not my problem.” For chapter and verse on the reality-warping nature of AI, PKD is your go-to visionary. We don’t know where development AI sits on that scale, but we can change the landscape to help us find out.

One concern is that using AI code means using code the developer does not fully understand, with implications for security, reliability, and maintainability. True, but AI code is not automatically worse here than plenty of things that are entirely the work of carbon-based life forms. Cut-and-paste, external functions, and “seems-to-work” libraries that aren’t fully understood are all culprits, and we have techniques to deal with them: audits, walkthroughs, and the horror of properly policed documentation protocols. AI code won’t get a magic pass here.
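
As one small illustration of the kind of audit such techniques amount to, here is a hedged sketch of a pre-commit-style check that flags source files carrying no provenance note at all, whether the code came from a human, a paste, a library, or a model. The “Provenance:” comment convention is hypothetical; a team would pick its own marker.

    # Hypothetical audit hook: flag .py files with no provenance annotation.
    # The "Provenance:" marker is an invented team convention, not a standard.
    import sys
    from pathlib import Path

    MARKER = "Provenance:"  # e.g. a comment like "# Provenance: human"

    def unaudited_files(paths: list[str]) -> list[str]:
        """Return the .py files that carry no provenance marker."""
        missing = []
        for p in paths:
            path = Path(p)
            if path.suffix == ".py" and MARKER not in path.read_text(errors="ignore"):
                missing.append(p)
        return missing

    if __name__ == "__main__":
        flagged = unaudited_files(sys.argv[1:])
        for f in flagged:
            print(f"no provenance note: {f}")
        sys.exit(1 if flagged else 0)

The point of the sketch is that the policing mechanism is indifferent to where the code came from, which is exactly why AI code gets no magic pass.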

The specter of intellectual property infringement lawsuits is a good one. Lawyers use it in bedtime stories to scare their children away from the nightmare of becoming programmers. But here, development AI has a chance to get ahead of its generalist cousins, as its training data sets will be predominantly open source. That doesn’t eliminate the problem, but it is a far more permissive and transparent world than text or image corpora, and an organization that already allows the use of open source code will have an easier time digesting AI trained solely on open source.

The final problem with anti-AI policies is, as Philip K. Dick pointed out so many times: if you can’t distinguish between what is natural and what is artificial, is there any meaningful difference? We cannot tell. Is there AI-generated code in the systems you’re using to read this right now, in the browser, in the cloud, or on-premises? In the enormously dynamic and interrelated mass of components that makes up computing in 2024, demanding purity of any single component may be untenable. If it really matters, we should look to the generalist AI systems, which are starting to incorporate fingerprints into AI-generated content that mark it as such. That is fairly easy to do in source code, much harder in an executable. Should we? Could we? Is it even ethical not to?
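
A minimal sketch of why fingerprinting is “fairly easy in source code”: a structured comment can travel with generated code and be detected later. The tag format below is hypothetical, and compilers strip comments, which is precisely why the same trick fails for executables.

    # Hypothetical source-level fingerprint: tag generated snippets with a
    # structured comment, then scan for it. The "@ai-generated" marker is an
    # invented convention for illustration only.
    import re
    from pathlib import Path

    TAG = "@ai-generated"  # hypothetical marker, e.g. "# @ai-generated model=..."
    TAG_RE = re.compile(rf"#\s*{re.escape(TAG)}\b.*")

    def tag_snippet(code: str, model: str) -> str:
        """Prefix a generated snippet with a provenance comment."""
        return f"# {TAG} model={model}\n{code}"

    def find_tagged_lines(path: Path) -> list[int]:
        """Return 1-based line numbers carrying the AI provenance tag."""
        return [i for i, line in enumerate(path.read_text().splitlines(), start=1)
                if TAG_RE.search(line)]

    if __name__ == "__main__":
        snippet = tag_snippet("def add(a, b):\n    return a + b\n", model="example-model")
        Path("generated.py").write_text(snippet)
        print(find_tagged_lines(Path("generated.py")))  # -> [1]

Comments survive in source trees, code review, and diffs; they do not survive compilation, minification, or a developer’s delete key, which is where the ethical questions above begin.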

Whether you use it or not, AI code will increasingly be part of the environment your product lives in, so establishing rules to identify and manage it is the essential foundation. You can’t freeze it out, you can’t force it in, and you can’t be ambiguous about it. As JetBrains has just very helpfully discovered. ®
