Headlines This Week
OpenAI rolled out a number of big updates to ChatGPT this week. Those updates include “eyes, ears, and a voice” (i.e., the chatbot now boasts image recognition, speech-to-text and text-to-speech capabilities, and Siri-like vocals; so you’re basically talking to the HAL 9000), as well as a new integration that allows users to browse the open internet.
At its annual Connect event this week, Meta unleashed a slew of new AI-related features. Say hello to AI-generated stickers. Huzzah.
Should you use ChatGPT as a therapist? Probably not. For more on that, check out this week’s interview.

Photo: Liz Schuler (Getty Images)
Last but not least: novelists are still suing the shit out of AI companies for stealing all of their copyrighted works and turning them into chatbot food.
The Top Story: Chalk One Up for the Good Guys
One of the lingering questions that haunted the Hollywood writers’ strike was what kind of protections would (or would not) materialize to protect writers from the threat of AI. Early on, film and streaming studios made it known that they were excited by the idea that an algorithm could now “write” a screenplay. Why wouldn’t they be? You don’t have to pay a software program. Thus, execs initially refused to make concessions that would’ve clearly defined the screenwriter as a distinctly human role.
Well, now the strike is over. Thankfully, somehow, writers won big protections against the kind of automated displacement they feared. But if it feels like a moment of victory, it could be just the beginning of an ongoing battle between the entertainment industry’s C-suite and its human laborers.
The new WGA contract that emerged from the writers’ strike includes broad protections for the entertainment industry’s writing corps. In addition to official concessions involving residuals and other economic concerns, the contract also definitively outlines protections against displacement via AI.

Image: Elliott Cowand Jr (Shutterstock)
According to the contract, studios won’t be allowed to use AI to write or re-write literary material, and AI-generated material will not be considered source material for stories and screenplays, which means that humans will retain sole credit for developing creative works. At the same time, while a writer might choose to use AI while writing, a company cannot force them to use it. Finally, companies will have to disclose to writers if any material given to them was generated via AI.
In short: it’s very good news that Hollywood writers have won some protections that clearly establish they won’t be immediately replaced by software just so that studio executives can spare themselves a small expense. Some commentators are even saying that the writers’ strike has offered everybody a blueprint for how to save everybody’s jobs from the threat of automation. At the same time, it remains clear that the entertainment industry (and many other industries) is still heavily invested in the concept of AI, and will be for the foreseeable future. Workers are going to have to keep fighting to protect their place in the economy, as companies increasingly look for wage-free, automated shortcuts.
The Interview: Calli Schroeder on Why You Shouldn’t Use a Chatbot for a Therapist
This week we chatted with Calli Schroeder, global privacy counsel at the Electronic Privacy Information Center (EPIC). We wanted to talk to Calli about an incident that took place this week involving OpenAI. Lilian Weng, the company’s head of safety systems, raised more than a few eyebrows when she tweeted that she felt “heard & warm” while talking to ChatGPT. She then tweeted: “Never tried therapy before but this is probably it? Try it especially if you usually just use it as a productivity tool.” People had qualms about this, including Calli, who subsequently posted a thread on Twitter breaking down why a chatbot was a less than optimal therapeutic partner: “Holy fucking shit, do not use ChatGPT as therapy,” Calli tweeted. We just had to know more. This interview has been edited for brevity and clarity.
In your tweet it seemed like you were saying that talking to a chatbot should not really qualify as therapy. I happen to agree with that opinion, but maybe you could clarify why you feel that way. Why is an AI chatbot probably not the best route for someone seeking mental help?
I see this as a real risk for a couple of reasons. If you’re trying to use generative AI systems as a therapist, and share all this really personal and painful information with the chatbot…all of that information is going into the system and it will eventually be used as training data. So your most personal and private thoughts are being used to train this company’s data set. And it may exist in that dataset forever. You may have no way of ever asking them to delete it. Or, you may not be able to get it removed. You may not know if it’s traceable back to you. There are a lot of reasons why this whole situation is a huge risk.

Photo: EPIC
Besides that, there’s also the fact that these platforms aren’t actually therapists; they’re not even human. So, not only do they not have any duty of care to you, but they also just literally don’t care. They’re not capable of caring. They’re also not liable if they give you bad advice that ends up making things worse for your mental state.
On a personal level, it makes me both disturbed and sad that people who are in a mental health crisis are reaching out to machines, just so that they can get someone or something that will listen to them and show them some empathy. I think that probably speaks to some much deeper problems in our society.
Yeah, it definitely suggests some deficiencies in our healthcare system.

One hundred percent. I wish that everyone had access to good, affordable therapy. I absolutely recognize that these chatbots are filling a gap because our healthcare system has failed people and we don’t have good mental health services. But the problem is that these so-called solutions can actually make things a lot worse for people. Like, if this was just a matter of someone writing in their diary to express their feelings, that’d be one thing. But these chatbots aren’t a neutral forum; they respond to you. And if people are looking for help and those responses are unhelpful, that’s concerning. If it’s exploiting people’s pain and what they’re telling it, that’s a whole separate issue.
Any other concerns you have about AI therapy ?
After I tweeted about this there were some people saying, “Well, if people choose to do this, who are you to tell them not to do it?” That’s a valid point. But the concern I have is that, in a lot of cases involving new technology, people aren’t allowed to make informed choices because there’s not a lot of clarity about how the tech works. If people were aware of how these systems are built, of how ChatGPT produces the content that it does, of where the information you feed it goes, and how long it’s stored, and you had a really clear idea of all of that and were still interested, then…sure, that’s fine. But, in the context of therapy, there’s still something problematic about it, because if you’re reaching out in this way, it’s entirely possible you’re in a distressed mental state where, by definition, you’re not thinking clearly. So it becomes a very complicated question of whether informed consent is a real thing in this context.
