Low-code is dead. Long live low-code.
The growing relevance of Low-code & No-code platforms in the Wild Wild West of AI

Low-code/no-code (LCNC) platforms have become one of the hottest topics in enterprise technology. The ability to build software with little to no code opens up possibilities for businesses to create applications and automate processes like never before.
With LCNC, speed of development and deployment, software stability, data protection, security, information and process governance, and scalability are all included by design.
However, with the recent explosion of AI tools and technologies, and the deafening volume of online chatter about them (mostly thanks to ChatGPT, and GPT-4 in particular), there is a growing debate about whether LCNC is still relevant and worth businesses' time and investment.
The short answer? Absolutely!
Not only is LCNC still relevant, but I would argue that it's more important now than ever to invest in it, especially in platforms with a clear roadmap to roll out their own version of Generative AI or to incorporate existing models into their offerings.
The low-code/no-code landscape is changing fast, and for the better. We are about to experience new ways of rapid application development with the help of machines, and users will soon have new ways of interacting with apps built on LCNC platforms to maximize productivity.
Will ChatGPT, and Generative AI in general, disrupt the LCNC landscape? Not anytime soon. For now, GPT alone cannot replace any of the respectable LCNC platforms. And maybe it shouldn't.
There are several obvious reasons for this, and none of them are doomsday scenarios. I’ll leave those to your imagination.
AI is still in its infancy
Yes, ChatGPT became the fastest-growing technology in the history of humanity in terms of user base, but it is still a kicking, screaming, and pooping baby.
Apart from the staggering speed at which it is growing, the reliability of the work produced by AI is still spotty. It is often cute and playful, but we shouldn't let AI run our businesses yet.
It’s complicated
The herd of AI gurus living in Twitterama will tell you that “99% of people don’t know how to do X and Y with AI, …” and that all you need to do to become an AI expert yourself is to read their thread and hit Follow. In reality, especially if you’re planning to do something serious, such as building a business application that does something meaningful, you will need a strong background in a number of technologies.
The promise of LCNC is to abstract away complexity and help non-techies build functional business software. So far, GPT-4 cannot wield a wand and make that happen by reading your mind, or your half-baked prompt. You still need to do a lot of the work yourself.
So, if you are a business leader or a knowledge worker who is not into programming, a reliable LCNC platform is still your best friend. You can always connect it to AI via built-in integrations or an API if you need to.
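To make the API route concrete: most LCNC platforms offer a generic HTTP/REST connector you can point at an external AI service. The sketch below builds the kind of JSON body an OpenAI-style chat-completion endpoint typically expects; the model name, roles, and fields are illustrative assumptions on my part, not any specific platform's API.

```python
import json

def build_ai_request(prompt: str, model: str = "gpt-4", temperature: float = 0.2) -> str:
    """Return a JSON request body for a generic chat-completion endpoint.

    Field names follow the common OpenAI-style convention; adapt them to
    whatever service your LCNC platform's HTTP connector actually calls.
    """
    payload = {
        "model": model,
        # A low temperature keeps answers more predictable for business use.
        "temperature": temperature,
        "messages": [
            {"role": "system", "content": "You are a helpful business assistant."},
            {"role": "user", "content": prompt},
        ],
    }
    return json.dumps(payload)

print(build_ai_request("Summarize last month's support tickets."))
```

In practice, you would paste the resulting body into the platform's HTTP action and map the response fields back into your app, keeping the AI at arm's length behind the platform's own governance controls.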
It’s shiny, but it is still messy out there
It is never a good idea for a typical business to live on the bleeding edge of technology. There are too many unknowns to risk the smooth running of your business on some seemingly revolutionary feature or technology, and the risk of inadvertent loss or misuse of data is high.
The dust has to settle and the implications of using “democratized” AI have to be quantifiable for businesses to use AI with peace of mind.
This also calls for clear and practical policies and procedures within organizations for all the stakeholders to know with confidence how they will interact with AI and how their data and their customers' data will be used in the process.
PEBCAC
People tend to add to the mess too. Our workforce has yet to keep pace in this brave new world, or has picked up just enough about the applications and ramifications of AI to gain a dangerously false sense of confidence to use it as they see fit.
That’s why training humans in the practical and responsible use of AI is at least as important as training AI models. People need to know what they are dealing with and what it means for their future to live in a work environment shared by Sapiens and Cyborgs.
In a recent incident, Samsung employees accidentally leaked trade secrets thanks to their unrestricted and overenthusiastic use of ChatGPT. This can be attributed to the fact that the employees were unaware of the sort of fire they were playing with, and that there were no “firewalls” in place to control either the people or the AI.
Another point worth mentioning is that businesses need to be careful not to overwhelm themselves or their employees by throwing new things at them on a daily basis. Doing so leads to anxiety and discouragement, partly driven by the fear of becoming irrelevant.
A robust LCNC strategy is inclusive and will help everyone to grow together and more steadily within the safety of a controlled environment.
Privacy, anyone?
Privacy and the protection of personal data are major concerns for any business that cares about its own well-being and that of its customers. Taking privacy lightly is both a business risk and an ethical failing.
Storing customers’ personal information without adhering to strict privacy policies and procedures is already a problem businesses grapple with today. Therefore, feeding sensitive information to the data-hungry AI baby whenever it screams louder should be avoided, or meticulously controlled.
We still don’t have a complete grasp of the scope and scale of the information that was fed into GPT to make it sound like an oracle, and now that OpenAI has a new capitalist master, we might never find out.
We also know little to nothing about how the data in your prompts, and the generated results that may contain your sensitive information, are being used (or misused) by AI or by humans who don’t report to you.
Feeding company data indiscriminately to an AI that you don’t own and whose inner workings you don’t know is not a sound idea, as it could open a Pandora’s box of personal data, privacy, and security risks.
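One way to make "meticulously controlled" concrete is to scrub obvious personal identifiers from prompts before they ever leave your network. The sketch below is deliberately naive, using two illustrative regex patterns of my own choosing; a real deployment would rely on dedicated PII-detection tooling rather than hand-rolled rules.

```python
import re

# Simplified, assumed patterns for the sketch; real email and phone formats
# are far messier than this.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def redact(prompt: str) -> str:
    """Replace obvious personal identifiers before a prompt leaves the network."""
    prompt = EMAIL.sub("[EMAIL]", prompt)
    prompt = PHONE.sub("[PHONE]", prompt)
    return prompt

print(redact("Contact Jane at jane.doe@example.com or +1 555 123 4567."))
# → Contact Jane at [EMAIL] or [PHONE].
```

A scrubbing step like this, sitting between your LCNC app and any external AI endpoint, is exactly the kind of "firewall" that was missing in the Samsung incident above.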
Intellectual property, probably?
An uncomfortable debate about who owns the rights to the capabilities of AI is gaining momentum in art and tech communities. The key to the magic of Generative AI until now is to train it on copious amounts of data, but where does this data come from? Are businesses at risk if they use tools that gained their capabilities by potentially infringing copyright laws in a good number of countries?
In the case of ChatGPT, the main source of the training data was the internet (up to September 2021, as of GPT-4). This means the vast majority of this data is copyrighted and owned by intellectuals, artists, businesses, and other individuals, living or deceased.
Now, who owns the rights to all this AI-generated information (or art)? Is it the people who created the wealth of human knowledge on which the AI was trained? Is it the people who created the AI? Is it us, the end users? Or is it the AI itself?

Until and unless these pressing questions are answered and the jurisdiction and ownership of such data are clarified by law, businesses should stay away from freely feeding organizational data into any AI and banking on magical results.
LCNC as we know it will remain quite relevant for the foreseeable future. The recent developments in AI will only empower low-code and no-code development by speeding up software development and unlocking new capabilities within the safe confines of LCNC.
Maturity and steady growth in capabilities and adoption are, perhaps, among the key reasons businesses will stick with LCNC. In larger companies, “legacy” systems from decades ago are still fully operational, managing most of the enterprise’s data and processes. LCNC often sits on top of these legacy systems, improving extensibility and abstracting away their complexity.
When choosing an LCNC platform, it is crucial to pick one that has a clear plan to integrate Generative AI safely and is built to be future-proof. That said, future-proofing anything these days is an increasingly challenging mountain to climb.
As for controlled testing environments, you can go nuts, provided your company offers a safe and secure sandbox for you to play in. One thing to keep in mind, though: be extra cautious when you are playing with data that your company doesn’t own or doesn’t have the right to use outside the permitted scope.
It’s exciting out there, and surely overwhelming, to keep up with the speed at which technology is changing. We should drive down the path of adoption with the lights on and see this journey as an opportunity to learn about new possibilities. Using our learnings responsibly is fundamental to staying relevant and to the longevity of our businesses.
Have a safe journey.
P.S. Binge on these
If you’d like to learn about Generative AI, this fairly digestible video could help:
If you want to lose sleep over how fast things are going down the drain, and how much worse it will get if AI development and use are not closely monitored and regulated, watch this sobering presentation: