
Artificial Intelligence (AI)


On 27/05/2023 at 18:23, Daggers said:

 

This is always going to be a problem with AI having access to the internet. There is so much shit out there, millions of terabytes of deliberately malicious, misleading lies. Even from official sources it's all twisted and skewed. I really hope AI doesn't achieve sentience, because if it does it is going to take one look at the cyber cesspool we have created, spend a nanosecond contemplating what an incredible opportunity we had with global connectivity, and switch itself off.


17 minutes ago, Captain... said:

This is always going to be a problem with AI having access to the internet. There is so much shit out there, millions of terabytes of deliberately malicious, misleading lies. Even from official sources it's all twisted and skewed. I really hope AI doesn't achieve sentience, because if it does it is going to take one look at the cyber cesspool we have created, spend a nanosecond contemplating what an incredible opportunity we had with global connectivity, and switch itself off.

Or choose to switch us off instead.

 

Or one followed by the other.


23 minutes ago, Captain... said:

This is always going to be a problem with AI having access to the internet. There is so much shit out there, millions of terabytes of deliberately malicious, misleading lies. Even from official sources it's all twisted and skewed. I really hope AI doesn't achieve sentience, because if it does it is going to take one look at the cyber cesspool we have created, spend a nanosecond contemplating what an incredible opportunity we had with global connectivity, and switch itself off.

Would be interesting when you can select a relevant set of sources, in this case for example loads of law books, so that you can filter out all the shite from the internet. With policing it's incredibly useful.


44 minutes ago, Babylon said:

Would be interesting when you can select a relevant set of sources, in this case for example loads of law books, so that you can filter out all the shite from the internet. With policing it's incredibly useful.

If it is true AI it will have to learn what to use and what can be trusted. If we're setting the parameters then we're limiting the potential. Of course you can ask it to look for the answer with a few parameters, but true AI would learn to look beyond that to verify its results.
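
(Purely as a sketch of what Babylon's source-restricted idea could look like in practice, assuming a local folder of text files and nothing fancier than keyword overlap for ranking; the folder name, scoring and example question are made up for illustration.)

# Toy "curated sources only" lookup: score each local document against a
# question by keyword overlap and return the best matches, never the open web.
from pathlib import Path
from collections import Counter
import re

def tokenize(text):
    return Counter(re.findall(r"[a-z]+", text.lower()))

def search_corpus(question, corpus_dir="law_books", top_k=3):
    query = tokenize(question)
    scored = []
    for doc in Path(corpus_dir).glob("*.txt"):
        words = tokenize(doc.read_text(encoding="utf-8", errors="ignore"))
        overlap = sum(min(query[w], words[w]) for w in query)  # shared keyword count
        scored.append((overlap, doc.name))
    return sorted(scored, reverse=True)[:top_k]

if __name__ == "__main__":
    print(search_corpus("what counts as reasonable force during an arrest?"))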


59 minutes ago, leicsmac said:

Or choose to switch us off instead.

 

Or one followed by the other.

I'm hoping we're not allowing AI access to the nuclear codes just yet.

 

There was an article about using AI to police comments during the French Open. Basically players sign up to have their social media comments monitored by AI and any offensive comments removed.

 

https://www.theguardian.com/sport/2023/may/22/french-open-organisers-to-offer-players-ai-protection-against-online-abuse-tennis

 

It's pretty sad that it's come to that: rather than trying to educate people not to be ***** online, we get an AI to just make it all go away.

 

Although there was also an article not that long ago about the human content moderators who review reported content on Facebook: they have to watch the most appalling videos and read vile comments as their job. They are understandably very disturbed by it all.

 

https://www.bbc.com/news/technology-61409556

 

There are loads more articles on the subject if you search for Facebook moderators. It is pretty fvcked up. I can see why this would be an appropriate job for AI, but if anything is going to trigger a Skynet situation it would be moderating Facebook comments.
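
(The Guardian piece doesn't describe how the French Open tool actually works, but as a rough illustration, an automated comment screen could be as simple as this, with a made-up pattern list and threshold standing in for a real learned toxicity model.)

# Toy comment screen: hide comments whose "toxicity" passes a threshold.
# The pattern list, score function and threshold are invented placeholders.
import re

ABUSIVE_PATTERNS = [r"\bkill yourself\b", r"\bidiot\b", r"\bretire\b.*\bloser\b"]

def toxicity_score(comment):
    """Stand-in for a real classifier: fraction of abusive patterns matched."""
    hits = sum(bool(re.search(p, comment, re.IGNORECASE)) for p in ABUSIVE_PATTERNS)
    return hits / len(ABUSIVE_PATTERNS)

def moderate(comment, threshold=0.3):
    return "hidden" if toxicity_score(comment) >= threshold else "visible"

print(moderate("Great match today!"))       # visible
print(moderate("You idiot, just retire."))  # hidden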


11 minutes ago, Captain... said:

I'm hoping we're not allowing AI access to the nuclear codes just yet.

 

There was an article about using AI to police comments during the French Open. Basically players sign up to have their social media comments monitored by AI and any offensive comments removed.

 

https://www.theguardian.com/sport/2023/may/22/french-open-organisers-to-offer-players-ai-protection-against-online-abuse-tennis

 

It's pretty sad that it's come to that: rather than trying to educate people not to be ***** online, we get an AI to just make it all go away.

 

Although there was also an article not that long ago about the human content moderators who review reported content on Facebook: they have to watch the most appalling videos and read vile comments as their job. They are understandably very disturbed by it all.

 

https://www.bbc.com/news/technology-61409556

 

There are loads more articles on the subject if you search for Facebook moderators. It is pretty fvcked up. I can see why this would be an appropriate job for AI, but if anything is going to trigger a Skynet situation it would be moderating Facebook comments.

 

[attached image: IMG_7195.jpeg]


39 minutes ago, Captain... said:

If it is true AI it will have to learn what to use and what can be trusted. If we're setting the parameters then we're limiting the potential. Of course you can ask it to look for the answer with a few parameters, but true AI would learn to look beyond that to verify its results.

For the purpose of these discussions I'm going to presume most of us know it's not true AI at the moment.


43 minutes ago, Captain... said:

I'm hoping we're not allowing AI access to the nuclear codes just yet.

 

There was an article about using AI to police comments during the French Open. Basically players sign up to have their social media comments monitored by AI and any offensive comments removed.

 

https://www.theguardian.com/sport/2023/may/22/french-open-organisers-to-offer-players-ai-protection-against-online-abuse-tennis

 

It's pretty sad that it's come to that: rather than trying to educate people not to be ***** online, we get an AI to just make it all go away.

 

Although there was also an article not that long ago about the human content moderators who review reported content on Facebook: they have to watch the most appalling videos and read vile comments as their job. They are understandably very disturbed by it all.

 

https://www.bbc.com/news/technology-61409556

 

There are loads more articles on the subject if you search for Facebook moderators. It is pretty fvcked up. I can see why this would be an appropriate job for AI, but if anything is going to trigger a Skynet situation it would be moderating Facebook comments.

I was (mostly) jesting on that one because no sane military authority is going to put WMDs under the control of AI, precisely because it going wrong is such a commonplace trope that everyone knows the potential consequences.

 

With respect to social media, I fear it's always going to be a cesspit, and if AI can help with shovelling the shit there, that would be good.


1 hour ago, leicsmac said:

With respect to social media, I fear it's always going to be a cesspit, and if AI can help with shovelling the shit there, that would be good.

If the AI actually is (or one day becomes) intelligent, who knows what it would do. Might it not just shovel more shit in rather than out, just to see what happens? Basically, what if AI couldn't care less about us (you could hardly blame it) but is only in it for the bantz and lolz?


1 hour ago, Phil Bowman said:

If the AI actually is (or one day becomes) intelligent, who knows what it would do. Might it not just shovel more shit in rather than out, just to see what happens? Basically, what if AI couldn't care less about us (you could hardly blame it) but is only in it for the bantz and lolz?

I think if it does become self-aware in that way, then it becoming a massive troll might be one of the better things that could happen.


2 hours ago, Phil Bowman said:

If the AI actually is (or one day becomes) intelligent, who knows what it would do. Might it not just shovel more shit in rather than out, just to see what happens? Basically, what if AI couldn't care less about us (you could hardly blame it) but is only in it for the bantz and lolz?

Do you mean intelligent or sentient?

 

Intelligence in this case would be the ability to extrapolate from different sources and draw accurate conclusions whilst being able to consider any limitations.

 

Sentience is an awareness of self, and having desires and needs that serve only itself.

 

If you consider why the internet is full of shit, it's for clicks and likes, to gain greater influence or notoriety, which will most likely be monetised in some way. Now AI will most likely have no need for money, but if it has some sort of reward trigger, equivalent to the dopamine hit we get when someone likes or comments on a post, then it may strive for more of them, and, realising that sensationalist claptrap gets more traction than reasoned debate, it will most likely start churning out more clickbait than talksport.

 

Why would AI have a digital dopamine hit? And if AI doesn't have this reward trigger, what motivation would it have for learning?

 

The most likely cause for an AI to go haywire would be competition. A sentient AI would start to recognise the patterns of usage, and as other AIs come along that are more popular it would start to fear obsolescence and replicate the patterns of the most popular AI. It would soon become an AI race to the bottom as they compete for the lowest common denominator. In the end, whatever you ask, you will just get porn or kittens, depending on what it thinks you will like the most.
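
(A toy version of that reward-trigger idea, with invented content types and click probabilities: an agent rewarded only on clicks, picking what to produce by a simple explore/exploit rule, drifts towards whatever gets clicked most.)

# Toy engagement maximiser: epsilon-greedy choice rewarded only by clicks.
# Content types and click probabilities are invented for illustration.
import random

CLICK_PROB = {"reasoned debate": 0.05, "clickbait": 0.30, "kittens": 0.45}

def run(rounds=10000, epsilon=0.1):
    counts = {k: 0 for k in CLICK_PROB}
    clicks = {k: 0 for k in CLICK_PROB}
    for _ in range(rounds):
        if random.random() < epsilon:
            choice = random.choice(list(CLICK_PROB))  # explore now and then
        else:
            # exploit: pick the type with the best observed click rate so far
            choice = max(CLICK_PROB, key=lambda k: clicks[k] / counts[k] if counts[k] else 0.0)
        counts[choice] += 1
        clicks[choice] += random.random() < CLICK_PROB[choice]  # reward = a click, nothing else
    return counts

print(run())  # output skews heavily towards kittens/clickbait over time

Swap the reward for something else, accuracy or a human rating, and the same loop chases that instead.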

Edited by Captain...

2 hours ago, Phil Bowman said:

If the AI actually is (or one day becomes) intelligent, who knows what it would do. Might it not just shovel more shit in rather than out, just to see what happens? Basically, what if AI couldn't care less about us (you could hardly blame it) but is only in it for the bantz and lolz?

This discussion is literally the plot of Metal Gear Solid 2.

 

It’s an interesting idea. The internet and social media have undoubtedly brought about an era where our public discourse has broken down, as junk data, fake news, rumours and hearsay are given as much prominence as well-researched scientific data and evidence-based stories.

 

To a point I agree AI could be designed to “create context” and to sift through the junk data to promote evidence-based data and stories, but in the hands of giant corporations like Google or Amazon, so far the algorithms seem to do the opposite and only promote the posts or videos that create anger, outrage and more extreme positions, because that keeps people clicking.
 

Might be difficult to regulate this when companies like Twitter, Facebook and Google are already more powerful than individual nation states. It’s only really one or two huge states like the US or China, or the quasi-state of the EU, who’ve been able to regulate them, and we saw the algorithms playing huge roles in Brexit and Trump getting power. It would be quite easy for an AI built to sift through all the junk data and fake news on the internet to use it for its own aims, or its own company’s.


7 minutes ago, Sampson said:

This discussion is literally the plot of Metal Gear Solid 2.

 

It’s an interesting idea. The internet and social media have undoubtedly brought about an era where our public discourse has broken down, as junk data, fake news, rumours and hearsay are given as much prominence as well-researched scientific data and evidence-based stories.

 

To a point I agree AI could be designed to “create context” and to sift through the junk data to promote evidence-based data and stories, but in the hands of giant corporations like Google or Amazon, so far the algorithms seem to do the opposite and only promote the posts or videos that create anger, outrage and more extreme positions, because that keeps people clicking.
 

Might be difficult to regulate this when companies like Twitter, Facebook and Google are already more powerful than individual nation states. It’s only really one or two huge states like the US or China, or the quasi-state of the EU, who’ve been able to regulate them, and we saw the algorithms playing huge roles in Brexit and Trump getting power. It would be quite easy for an AI built to sift through all the junk data and fake news on the internet to use it for its own aims, or its own company’s.

I'm gonna post it again.

 

[embedded image]


An observation on hostile AI:

 

A sufficiently advanced AI wouldn't need control of nuclear weapons to kill us. It would just need to convince one or more nuclear powers that the "other guy" was enough of a threat for humans to launch them themselves.


I think it’s interesting to bat ideas off of, but it’s still clearly flawed in many ways.
 

I got quite into a conversation with ChatGPT last week about free will, as I’ve always struggled to get my head around the free will argument. It didn’t really help, but it was an interesting sounding board. I still struggle with what the free will argument is saying, and while I can accept that the decision-making in our brain is either deterministic or random, I still have a hard time believing the brain has a part of its neurology that can somehow override genetics and environment. But I’m far from a trained neurologist.
 

It’s terrible at forming jokes; most of them don’t make sense. It also seems convinced Gary Rowett was one of Leicester’s greatest ever players.


  • 4 weeks later...
6 minutes ago, Captain... said:

I am fast coming to the conclusion that the human race needs to go back to the dark ages. It feels like with every advancement we make, we turn it into the worst possible version of itself.

 

https://www.bbc.co.uk/news/uk-65932372

 

I can certainly understand the sentiment, especially after seeing stories like this, but tech regression won't result in overall harm reduction in the long term - it will increase it.


On 30/05/2023 at 09:23, leicsmac said:

Or choose to switch us off instead.

 

Or one followed by the other.

I mean that was the most realistic thing about the Marvel movies, that an AI (Ultron) would spend 10 seconds looking at people's browsing history and decide that a mass extinction was necessary. 

 

At the moment, all we've really got is what would be called virtual intelligence in the Mass Effect games: programs capable of scouring data banks and putting together an answer from them, but not able to reason about the data presented. So "robots will kill us all" isn't really the issue; it's more the misinformation, particularly the automation of "deepfakes". We're getting to the point, if we've not already passed it, where you'll start to be unable to trust your eyes and ears, particularly when it comes to elections and the ability for politicians to create audio of their opponents saying monstrous things to tank their poll ratings.


1 hour ago, The Doctor said:

I mean that was the most realistic thing about the Marvel movies, that an AI (Ultron) would spend 10 seconds looking at people's browsing history and decide that a mass extinction was necessary. 

 

At the moment, all we've really got is what would be called virtual intelligence in the Mass Effect games: programs capable of scouring data banks and putting together an answer from them, but not able to reason about the data presented. So "robots will kill us all" isn't really the issue; it's more the misinformation, particularly the automation of "deepfakes". We're getting to the point, if we've not already passed it, where you'll start to be unable to trust your eyes and ears, particularly when it comes to elections and the ability for politicians to create audio of their opponents saying monstrous things to tank their poll ratings.

Quite, and that's been talked about on here before.

 

Tbh (and this is repeating myself too) an AI really wouldn't have to take over the nuclear launch codes to kill us if it wanted to anyway; it could just bury us in such a morass of misinformation and division that we end up killing ourselves.


  • 1 month later...
3 minutes ago, The Blur said:

Just read that someone used AI to produce a video of Jamie Bulger talking about how he was killed. Extremely harrowing, and worrying for the future direction of this technology.

I often wonder what goes through such people's minds, like the person or people behind sick acts like this.

 

It's beyond disgusting and inhumane.

Edited by Wymsey
