The Robots are coming

Fractured fairy tale: One day people might look back and say to themselves, "I wish I hadn't used that Alexa and 'Hello Google, what time is it', allowing them to collect all the data that will be used against them in the future."

drive2succeed: People are afraid of losing jobs to robots and AI, but consider this: those trucks that will soon drive themselves will still need to be repaired by people. And the self-driving technology will probably be assembled by a human in a factory somewhere. So you lose one job and gain two. Do the math.

ghostgeek: CAIS says "enfeeblement" is when humanity "becomes completely dependent on machines, similar to the scenario portrayed in the film WALL-E". If you need a reminder, humans in that movie were happy animals who did no work and could barely stand on their own. Robots tended to everything for them. Guessing whether this is possible for our entire species is crystal-ball-gazing. But there is another, more insidious form of dependency that is not so far away: the handing over of power to a technology we may not fully understand, says Stephanie Hare, an AI ethics researcher and author of Technology Is Not Neutral. Think Minority Report, pictured at the top of this article. Well-respected police officer John Anderton (played by Tom Cruise) is accused of a crime he hasn't committed because the systems built to predict crime are certain he will. "Predictive policing is here - the London Met uses it," Dr Hare says. In the film, Tom Cruise's life is ruined by an "unquestionable" system which he doesn't fully understand. [ https://www.bbc.co.uk/news/technology-65786964 ]

ghostgeek: So what happens when someone has "a life-altering decision" - such as a mortgage application or prison parole - refused by AI? Today, a human could explain why you didn't meet the criteria. But many AI systems are opaque, and even the researchers who built them often don't fully understand the decision-making. "We just feed the data in, the computer does something... magic happens, and then an outcome happens," Dr Hare says. The technology might be efficient, but it's arguable that it should never be used in critical scenarios like policing, healthcare, or even war, she says. "If they can't explain it, it's not okay." [ https://www.bbc.co.uk/news/technology-65786964 ]

Lumpenproletariat: What's the worst case so far of robots taking over and doing harm to society? Other than just replacing some workers who were made worse off? Society is made better off if that job is done better by the robots, even if those workers lost their jobs. And all that matters is for all of us, all of society, to be made better off. So, where's a case where the robots, AI, more tech, etc. made society worse off? Worst case so far.

Lumpenproletariat:

ghostgeek: Think Minority Report, pictured at the top of this article. Well-respected police officer John Anderton (played by Tom Cruise) is accused of a crime he hasn't committed because the systems built to predict crime are certain he will.
____________________

Let's assume that the ability to predict crimes will happen. In some ways we already have it, with prevention technology, cameras, face ID, and monitoring of individuals. There is no threat as long as the system only investigates the suspects (suspected of a future crime) and requires some preventive steps. Why not require some preventive measures and extra monitoring of those suspects?
There has been some of this already - restricting weapon ownership by those suspects, for example. What's the worst case of that so far? How do you know the technology made a mistake in a particular case? Why couldn't any such mistake be corrected by still further AI and improved technology?

ghostgeek: The debate isn't about the present but about what the future might hold. That's where the scares come from: looking at what exists now and making an educated guess about the future.

ghostgeek: Artificial intelligence is coming for your job - whether you like it or not. It could be the plot of a Hollywood movie: the machines are on the rise and taking work away from humans on an ever-increasing scale. Earlier this month, Microsoft announced it is laying off 10,000 workers to cut costs, while making a "multiyear, multibillion dollar investment" in the artificial intelligence startup OpenAI. OpenAI's free writing tool, ChatGPT, is taking the world by storm due to its ability to respond to a range of queries with human-like text output. "AI is replacing the white-collar workers. I don't think anyone can stop that," said Pengcheng Shi, an associate dean in the department of computing and information sciences at Rochester Institute of Technology. "This is not crying wolf," Shi warned the New York Post. "The wolf is at the door." Experts are warning that many well-paid workers are set to be left vulnerable, with many companies turning to ChatGPT. The amazingly intelligent chatbot is making waves in a number of industries, but there are already a lot of concerns. [ https://www.mirror.co.uk/news/us-news/jobs-new-ai-technology-chatgpt-29066279 ]

ghostgeek: AI could be in our boardrooms and even our courts. ChatGPT recently passed law exams in four courses at the University of Minnesota, performing on average at the level of a C+ student. It also managed to pass the final exam of an MBA programme at the University of Pennsylvania's Wharton School. During a study, the AI did an "amazing job at basic operations management and process analysis questions including those that are based on case studies". The study noted that the AI displayed a "remarkable ability to automate some of the skills of highly compensated knowledge workers in general and specifically the knowledge workers in the jobs held by MBA graduates including analysts, managers and consultants". It did make some "surprising mistakes" with basic maths, though, so it's not time to panic just yet. [ https://www.mirror.co.uk/news/us-news/jobs-new-ai-technology-chatgpt-29066279 ]

ghostgeek: AI has been used to diagnose a patient in just seconds. Dr Prithvi Santana, who recently graduated from UNSW, was experimenting with ChatGPT and gave it simple medical prompts and information that the bot was able to turn into a diagnosis. "ChatGPT might just take my job as a doctor," Dr Santana said in the video. "The scariest part is, I gave it a patient history with nuances it needed to integrate... and it diagnosed the patient for me." He claimed he was so excited by its functionality that his nose started bleeding, but it wasn't able to help there. [ https://www.mirror.co.uk/news/us-news/jobs-new-ai-technology-chatgpt-29066279 ]

ghostgeek: AI is also coming for pictures, with OpenAI launching a tool, DALL-E, which can generate tailored images from user-generated prompts on command. You can ask the tool to create any image - even a portrait in the style of Picasso, for example. However, it reportedly struggles with more nuanced techniques.
It has already run into trouble over copyright issues. Website designers and engineers are also at risk, as AI can draft code to build sites and other pieces of IT. "As time goes on, probably today or the next three, five, 10 years, those software engineers, if their job is to know how to code ... I don't think they will be broadly needed," Shi said. [ https://www.mirror.co.uk/news/us-news/jobs-new-ai-technology-chatgpt-29066279 ]

kittybobo34: I have already seen AI-generated images of politicians doing and saying things that didn't happen.

ghostgeek: Is that so? Well, you know what will happen next. Those politicians, when caught with their trousers down, will claim it's just AI fakery.