Around the world, cybercrime is on the increase. By 2050, smart cities and houses are likely to be the norm. Your fridge will tell a drone to pick up fresh milk when you run out, and lampposts will adjust the intensity of street lights when smartphones, and thus humans, are nearby. In this hyperconnected world, the threat cyber-criminals pose will only grow. In the home, smart devices, including voice assistants, vacuum cleaners, and toilets, will be easy pickings for hackers. While these poorly secured gadgets don't store sensitive data themselves, they link to devices that do, making them vulnerable access points for criminals.
Meanwhile, AI is a double-edged sword. While AI systems can help to spot incoming threats, attackers could also use them to unearth vulnerabilities. In the future, foreign powers or cyber-criminals could cripple a country's electricity network by taking over the AI that controls it, or they might shut down other critical infrastructure or cause traffic chaos on the roads. AI could also help identity fraudsters by generating deepfakes.
These digital doppelgangers are currently a novelty, but the techniques used to create them are improving rapidly, producing increasingly realistic depictions of individuals. So far, they have mostly been used to create revenge pornography. But a convincing digital avatar could be a useful tool for wheedling key details, such as passwords or bank details, out of targets.

Still, the struggle to keep systems secure isn't hopeless. By 2050, much poorly written and outdated code will have been removed and replaced by safer alternatives. Even passwords might be phased out, rendered obsolete by more secure biometric authentication. Deepfakes might be defeated by continuous verification systems that track eye movement, faces, and keystrokes to confirm the person behind the keyboard is who they claim to be. But that risks putting cybersecurity above privacy.
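To make the idea of continuous verification concrete, here is a minimal, purely illustrative sketch of one of its simplest ingredients: keystroke dynamics. The function names, the single-feature "typing rhythm" profile, and the tolerance threshold are all hypothetical simplifications; real systems combine many behavioural signals with statistical or machine-learning models.

```python
# Toy continuous-authentication check based on keystroke dynamics.
# All names, data, and thresholds here are illustrative assumptions,
# not a description of any real product.
from statistics import mean

def keystroke_profile(intervals):
    """Summarise inter-key intervals (in seconds) as a mean typing rhythm."""
    return mean(intervals)

def matches_profile(enrolled, sample, tolerance=0.05):
    """True if the sampled rhythm is within tolerance of the enrolled one."""
    return abs(keystroke_profile(enrolled) - keystroke_profile(sample)) <= tolerance

enrolled = [0.12, 0.15, 0.11, 0.14]   # intervals captured at enrolment
genuine  = [0.13, 0.14, 0.12, 0.15]   # same user: similar rhythm
impostor = [0.30, 0.28, 0.33, 0.31]   # different typist: markedly slower

print(matches_profile(enrolled, genuine))   # True
print(matches_profile(enrolled, impostor))  # False
```

A deployed system would run checks like this continuously in the background, so an attacker who steals a password, or a deepfake that fools a login camera, would still be flagged once their behaviour diverges from the enrolled profile.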
Policymakers and designers must work to ensure that this does not become a binary choice.