Software developer here (actually a game programmer, but the fundamentals are the same, since programming is programming)
The short answer is, AI is largely a scam in the sense that people hype it up as being the future of computing and that it's going to change the world. AI has extremely limited use cases and we are quickly discovering that it's really not all that applicable to everyday applications.
The long answer is, AI is a whole complicated mess that really needs to be unraveled to understand why it's such a worthless idea.
By fundamentally understanding it, it will also make sense why AI is not taking over the software development space. In fact, most software developers really don't care about advancements in AI at all.
So here's the simple version:
AI is a complete misnomer. What we call "AI" now is not actually any form of intelligence. In fact, it's the complete opposite - it's doing large-volume data processing, which relies on completely different processing methodologies than conditional logic does, which is why it's so often done on GPUs. Machine Learning could not be further from decision making.
What it is actually doing is essentially generating data that fits a pattern derived from previous data. That's it. There's nothing special about it, and the secret people don't want you to know is that it's frequently wrong. But it's wrong in a way that feels right.
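As a toy illustration of "generating data that fits a pattern derived from previous data", here is a least-squares line fit in plain Python. Real ML models are enormously bigger, but the core idea of extrapolating from prior data is the same - and note that the extrapolated value merely looks plausible; nothing guarantees it matches reality:

```python
def fit_line(xs, ys):
    """Least-squares fit of y = a*x + b: the 'pattern derived from previous data'."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope: covariance of x,y divided by variance of x
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return a, b

# A small "history" that happens to look linear
xs = [1, 2, 3, 4]
ys = [2, 4, 6, 8]
a, b = fit_line(xs, ys)

# "Generate" the next data point by extrapolating the pattern.
# Plausible-looking output, with no concern for whether it's actually right.
print(a * 5 + b)  # 10.0
```

The data here is contrived to be perfectly linear; feed this the stock market and it will still happily emit a confident-looking number.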
In much the same way that some blur can trick our eyes into thinking an image patch job is much higher quality than it actually is, Machine Learning does a good job of "fudging" the data to make it look acceptable to us.
For things like large-volume stock trading, which is where ML is largely being applied, it mostly seems to work, specifically because the stock market is essentially magic and nobody really seems to understand it. When people try ML models and get positive results, nobody really knows why, and the whole thing might just be placebo, since large-scale trading software was working successfully for decades before ML arrived.
Every time an ML model undergoes real scrutiny, where we actually know the expected results upfront, it fails.
When it comes to things that are a lot less objective, it can generate results that look okay (like emulating a voice or upscaling an image).
Machine Learning will, at best, always be niche, and will never have any use for real data processing. It will only ever be able to generate fuzzy data based on a series of inputs.
For typical business cases and business logic for business applications (every Thursday we need to apply a 10% discount to customer orders over $300 if they have bought 4 items in the last month, etc), AI/ML is completely useless. For typical game development applications, AI/ML is completely useless. For the vast majority of programming tasks, ML is completely useless.
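For contrast, the Thursday-discount rule above is exactly the kind of thing ordinary conditional logic handles directly. A minimal sketch (the function name and thresholds just restate the made-up example from the text, nothing more):

```python
from datetime import date

def thursday_discount(order_total: float, items_last_month: int, today: date) -> float:
    """Hypothetical promotion: every Thursday, take 10% off orders
    over $300 if the customer bought at least 4 items in the last month."""
    is_thursday = today.weekday() == 3  # Monday is 0, so Thursday is 3
    if is_thursday and order_total > 300 and items_last_month >= 4:
        return round(order_total * 0.90, 2)
    return order_total

# A qualifying order on a Thursday (2024-06-06 was a Thursday):
print(thursday_discount(400.0, 5, date(2024, 6, 6)))  # 360.0
```

Deterministic, auditable, and trivially testable - everything a generated-from-a-pattern result is not.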
If you need to process a large dataset and you want to generate something that will somewhat match the original dataset in a way that looks about right but isn't intended for a high degree of scrutiny, ML is the right tool for the job. For anything else, a standard algorithm will serve you better.
THE GOOD NEWS IS, we have largely nothing to worry about in terms of job automation, evil robots, or anything really serious happening with AI/ML. Job automation happens for the same reason all automation happens - a particular job becomes definable as an algorithm, at which point a computer can do it faster and more accurately, since algorithms are what computers are good at. AI/ML is far less likely to take away your job than a bog-standard program written by some nerd that defines your job as a series of steps and performs them efficiently. No, AI is not going to take your job.
I love everyone here, so I'm going to let you all in on a little secret: automation depends entirely on how easily definable your job is. That means the best way to keep your job is to work in a field where jobs are ill-defined, such as management, advertising or marketing. Jobs where the day-to-day work follows a very well-defined process are screwed, and not because of AI, but because those jobs are easy to define as a series of steps for a computer to perform. It doesn't matter how worthwhile or valuable your job is; if your work is largely objectively definable, you're in trouble.

The irony is that if we automate everything, many of the jobs left will be the largely useless ones - sales, management, human resources, etc. Not all hard-to-define jobs are useless, though: programmer, lawyer, doctor, really any job that is sufficiently complicated as to require significant human input and decision making rather than being process-driven. I'm honestly surprised "Fast Food Worker" hasn't already been replaced by machines everywhere - that job is nothing but following a process. For the most part, fast food workers are essentially acting as slow, fallible computers.
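The "series of steps" point can be made concrete with a toy sketch. Everything here is hypothetical and wildly simplified - a real ordering system would be far more involved - but it shows what "definable as an algorithm" means in practice:

```python
# A "job" written down as an explicit series of steps.
# Once it looks like this, a computer can perform it.

def take_order(state):
    state["order"] = ["burger", "fries"]
    return state

def cook_items(state):
    state["cooked"] = ["cooked " + item for item in state["order"]]
    return state

def bag_items(state):
    state["bag"] = state["cooked"]
    return state

def hand_over(state):
    state["done"] = True
    return state

STEPS = [take_order, cook_items, bag_items, hand_over]

def run_shift():
    state = {}
    for step in STEPS:
        state = step(state)
    return state

print(run_shift()["bag"])  # ['cooked burger', 'cooked fries']
```

No ML anywhere in sight - just a plain program executing a well-defined process, which is exactly the kind of automation that actually replaces process-driven work.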
Depending on your perspective, this can actually be a good thing. While it sucks in the short term to lose your job, in general it pushes society to create more meaningful, intelligent jobs. Nobody works copying books anymore; now we just print them. Nobody is lamenting the poor book copiers, because it was a largely thoughtless, error-prone, menial job. It's going to suck for fast food workers to lose their jobs in the short term, but if it means we as a society can move past the demeaning and unfulfilling job of "fast food worker", and people can pursue something more meaningful as a result, then we are all better off.
Gonna linkdump since this is the current AI thread.
People are paying attention to this "trend". It's not just a fad, it's a fundamental existential shift.
Since the politicization of artificial intelligence is inevitable, now more than ever, each one of us should evaluate its potential impact on our respective nations— and be prepared for what is to come!
www.forbes.com
The market for artificial intelligence grew beyond 184 billion U.S. dollars.
www.statista.com
AI-based surveillance systems offer employers a quantum leap in observational power when it comes to monitoring their staff
www.raconteur.net
This Article offers a novel perspective on the implications of increasingly autonomous and "black box" algorithms, within the ramifications of algorithmic trading.
papers.ssrn.com
View: https://youtu.be/Mqg3aTGNxZ0
It's why I find links like these so stupid. People get all up in arms about how "AI is revolutionising surveillance", but in reality, surveillance is an industry where the status quo is to assume guilt first and investigate later. They will see a bunch of initial hits and go "wow, this system is working great", and only realise later, when most of those hits turn out to be false positives, that the AI hasn't actually done anything grounded in reality - it has just spat out a result based on a set of inputs, with no concern for whether that data is valid or not. AI may have some relevance for upscaling security footage (although whether that is admissible evidence is sketchy, since the results aren't exactly verifiable), but for things like facial recognition you're better off using other algorithms.
It's a fad in most cases. It will die out soon, except in the very specific cases where it's applicable, which will be image touch-ups and other areas where objectively verifiable data correctness doesn't matter very much.