Artificial Intelligence

irvinehomeowner said:
inv0ke-epipen said:
irvinehomeowner said:
Also, it's not that I think AI isn't capable of what everyone is hoping.

What I (and others) are cautious about is how to adopt AI into society safely.

And no one has addressed the issue of hacking and using AI for more nefarious purposes... HAL-9000 isn't just rainbows and unicorns.

There exists an entire industry dedicated to addressing the risk of hacking.

KeenLab does some great stuff targeting Autopilot: https://keenlab.tencent.com/en/2019...imental-Security-Research-of-Tesla-Autopilot/

Exactly.

I didn't mean that no one in the industry has addressed this; I meant that the members here haven't acknowledged that danger as credible when I brought it up.

That article makes me trust autonomous driving less. :)

Honestly...the fear is not much different from the fear of people who think they are safer driving a car than flying. In an airplane, you have no control and rely on professional pilots and computers to get you from point A to point B... yet flying is significantly safer than driving.

People like having control...even if that control skews the benefits of that activity.
 
It's hard to stop autonomous driving. There are just too many powerful arguments for it:
- Huge economic incentives
- Already safer than human driving
- Driving is boring. 63% of Americans prefer not to drive if possible
- Better fuel economy and better for the environment

The public perception is quickly changing too. 52% of U.S. adults think automated vehicles are more dangerous than traditional vehicles operated by people. But that number was 86% in 2016.

California regulators have just proposed regulations that would allow vehicle manufacturers to deploy small self-driving delivery trucks on public roads. https://www.yahoo.com/news/california-proposes-steps-allow-small-184444591.html

Autonomous driving is here whether you like it or not.
 
Kenkoko said:
It's hard to stop autonomous driving. There are just too many powerful arguments for it:
- Huge economic incentives
- Already safer than human driving
- Driving is boring. 63% of Americans prefer not to drive if possible
- Better fuel economy and better for the environment

The public perception is quickly changing too. 52% of U.S. adults think automated vehicles are more dangerous than traditional vehicles operated by people. But that number was 86% in 2016.

California regulators have just proposed regulations that would allow vehicle manufacturers to deploy small self-driving delivery trucks on public roads. https://www.yahoo.com/news/california-proposes-steps-allow-small-184444591.html

Autonomous driving is here whether you like it or not.

I do think you guys are simplifying things here a bit.

While I do believe autonomous cars will happen I also think that there will be so many logistical issues that we won't see widespread usage for a while... possibly 10-20 years if not more.

For all those benefits you list, the hurdles are:

- Cost
- Safety (perceived or not)
- Resistance from labor organizations (as NSR posted)
- Scale (along with cost, you will have to replace or retrofit fleets of vehicles)
- Road infrastructure
- Connectivity/broadband infrastructure

But this is kind of drifting from why I started this thread. There also has to be an implementation analysis of what the better uses of AI are.

For me, rather than try to replace labor groups, I'd rather have AI focus on things we haven't solved... hunger (crop analysis, food production, etc), medical (Siri needs to get on that cancer issue), poverty (create opportunities for everyone to be able to earn money... UBI?), and conflict (bridge the gaps between cultures/countries).

I see Microsoft commercials that show how AI is used to map ruins, make beer, etc... but what about stuff to improve society (and yes, I realize autonomous cars do help but that's not high on my list of what we need).
 
irvinehomeowner said:
Kenkoko said:
It's hard to stop autonomous driving. There are just too many powerful arguments for it:
- Huge economic incentives
- Already safer than human driving
- Driving is boring. 63% of Americans prefer not to drive if possible
- Better fuel economy and better for the environment

The public perception is quickly changing too. 52% of U.S. adults think automated vehicles are more dangerous than traditional vehicles operated by people. But that number was 86% in 2016.

California regulators have just proposed regulations that would allow vehicle manufacturers to deploy small self-driving delivery trucks on public roads. https://www.yahoo.com/news/california-proposes-steps-allow-small-184444591.html

Autonomous driving is here whether you like it or not.

I do think you guys are simplifying things here a bit.

While I do believe autonomous cars will happen I also think that there will be so many logistical issues that we won't see widespread usage for a while... possibly 10-20 years if not more.

For all those benefits you list, the hurdles are:

- Cost
- Safety (perceived or not)
- Resistance from labor organizations (as NSR posted)
- Scale (along with cost, you will have to replace or retrofit fleets of vehicles)
- Road infrastructure
- Connectivity/broadband infrastructure

But this is kind of drifting from why I started this thread. There also has to be an implementation analysis of what the better uses of AI are.

For me, rather than try to replace labor groups, I'd rather have AI focus on things we haven't solved... hunger (crop analysis, food production, etc), medical (Siri needs to get on that cancer issue), poverty (create opportunities for everyone to be able to earn money... UBI?), and conflict (bridge the gaps between cultures/countries).

I see Microsoft commercials that show how AI is used to map ruins, make beer, etc... but what about stuff to improve society (and yes, I realize autonomous cars do help but that's not high on my list of what we need).

Those don't make money...no financial incentives to do those things.
 
Don't the Boeing Max 8 crashes really answer all those concerns about AI and what exactly companies will do?

Yesterday was tax day. 

Automation has been impacting the back office of companies for a long time.  Look around your office, though: how many of your coworkers have daily job routines that, in large part, really aren't more complex than completing taxes?

I did my taxes in basically an hour.  Import, import, import... review.
 
Oh...it's coming

The Chinese government has drawn wide international condemnation for its harsh crackdown on ethnic Muslims in its western region, including holding as many as a million of them in detention camps.

Now, documents and interviews show that the authorities are also using a vast, secret system of advanced facial recognition technology to track and control the Uighurs, a largely Muslim minority. It is the first known example of a government intentionally using artificial intelligence for racial profiling, experts said.
https://www.nytimes.com/2019/04/14/...artificial-intelligence-racial-profiling.html
 
Not sure if this is really an issue but there are concerns in AI about diversity bias:
https://www.engadget.com/2019/04/17/artificial-intelligence-diversity-disaster/

The lack of diversity within artificial intelligence is pushing the field to a dangerous "tipping point," according to new research from the AI Now Institute. It says that due to an overwhelming proportion of white males in the field, the technology is at risk of perpetuating historical biases and power imbalances.

The consequences of this issue are well documented, from hate speech-spewing chatbots to racial bias in facial recognition. The report says that these failings -- attributed to a lack of diversity within the AI sector -- have created a "moment of reckoning." Report author Kate Crawford said that the industry needs to acknowledge the gravity of the situation, and that the use of AI systems for classification, detection and prediction of race and gender "is in urgent need of re-evaluation."

Indeed, the report found that more than 80 percent of AI professors are men -- a figure that reflects a wider problem across the computer science landscape. In 2015 women comprised only 24 percent of the computer and information sciences workforce. Meanwhile, only 2.5 percent of Google's employees are black, with Facebook and Microsoft each reporting an only marginally higher four percent. Data on trans employees and other gender minorities is almost non-existent.

So this may be "snowflaking" but given the state of US politics today, this is something to worry about.
 
irvinehomeowner said:
Not sure if this is really an issue but there are concerns in AI about diversity bias:
https://www.engadget.com/2019/04/17/artificial-intelligence-diversity-disaster/

The lack of diversity within artificial intelligence is pushing the field to a dangerous "tipping point," according to new research from the AI Now Institute. It says that due to an overwhelming proportion of white males in the field, the technology is at risk of perpetuating historical biases and power imbalances.

The consequences of this issue are well documented, from hate speech-spewing chatbots to racial bias in facial recognition. The report says that these failings -- attributed to a lack of diversity within the AI sector -- have created a "moment of reckoning." Report author Kate Crawford said that the industry needs to acknowledge the gravity of the situation, and that the use of AI systems for classification, detection and prediction of race and gender "is in urgent need of re-evaluation."

Indeed, the report found that more than 80 percent of AI professors are men -- a figure that reflects a wider problem across the computer science landscape. In 2015 women comprised only 24 percent of the computer and information sciences workforce. Meanwhile, only 2.5 percent of Google's employees are black, with Facebook and Microsoft each reporting an only marginally higher four percent. Data on trans employees and other gender minorities is almost non-existent.

So this may be "snowflaking" but given the state of US politics today, this is something to worry about.

For sure...remember that article a few months back where TIC was selling license plate information to ICE?  It's coming.
 
Automation, like people, profiles unless it's manually limited.  There have already been repeated instances of smart-tool automation doing just that.  Amazon had a problem with automated job screening: in self-learning mode, the tool looked at existing traits and, voilà, a demographic profile was created; the AI basically decided not to bother looking at women for software developer jobs.
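A minimal sketch of how that happens, for the curious. Everything below is invented (toy data, a made-up proxy feature), so it's an illustration of the failure mode, not Amazon's actual system:

```python
# Toy example: a screener trained on biased historical hiring decisions.
# The feature names and data are invented for illustration.
from sklearn.linear_model import LogisticRegression

# Each row: [attended_womens_college, years_experience]
# Labels are past hiring outcomes, which encode the old bias.
X = [
    [1, 5], [1, 7], [1, 6], [1, 8],   # historically rejected
    [0, 5], [0, 7], [0, 6], [0, 8],   # historically hired
]
y = [0, 0, 0, 0, 1, 1, 1, 1]

model = LogisticRegression().fit(X, y)

# Two otherwise identical resumes, differing only in the proxy feature:
print(model.predict_proba([[1, 6]])[0][1])  # low "hire" probability
print(model.predict_proba([[0, 6]])[0][1])  # high "hire" probability
# "Gender" was never an input, but the model learned its proxy anyway.
```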

irvinehomeowner said:
Not sure if this is really an issue but there are concerns in AI about diversity bias:
https://www.engadget.com/2019/04/17/artificial-intelligence-diversity-disaster/

The lack of diversity within artificial intelligence is pushing the field to a dangerous "tipping point," according to new research from the AI Now Institute. It says that due to an overwhelming proportion of white males in the field, the technology is at risk of perpetuating historical biases and power imbalances.

The consequences of this issue are well documented, from hate speech-spewing chatbots to racial bias in facial recognition. The report says that these failings -- attributed to a lack of diversity within the AI sector -- have created a "moment of reckoning." Report author Kate Crawford said that the industry needs to acknowledge the gravity of the situation, and that the use of AI systems for classification, detection and prediction of race and gender "is in urgent need of re-evaluation."

Indeed, the report found that more than 80 percent of AI professors are men -- a figure that reflects a wider problem across the computer science landscape. In 2015 women comprised only 24 percent of the computer and information sciences workforce. Meanwhile, only 2.5 percent of Google's employees are black, with Facebook and Microsoft each reporting an only marginally higher four percent. Data on trans employees and other gender minorities is almost non-existent.

So this may be "snowflaking" but given the state of US politics today, this is something to worry about.
 
nosuchreality said:
Don't the Boeing Max 8 crashes really answer all those concerns about AI and what exactly companies will do?

Yesterday was tax day. 

Automation has been impacting the back office of companies for a long time.  Look around your office, though: how many of your coworkers have daily job routines that, in large part, really aren't more complex than completing taxes?

I did my taxes in basically an hour.  Import, import, import... review.

Not sure if this is what you were referring to, but the Boeing issue seems like more of a corporation/capitalism problem than a software problem.
https://www.youtube.com/watch?v=H2tuKiiznsY

Basically not enough testing, and sales/business trumping safety.
 
marmott said:
I don't think AI has anything to do with the MCAS on the 737 MAX. The MCAS "helps pilots bring the nose down in the event the jet's angle of attack drifted too high when flying manually, putting the aircraft at risk of stalling".

https://theaircurrent.com/aviation-safety/what-is-the-boeing-737-max-maneuvering-characteristics-augmentation-system-mcas-jt610/

MCAS is an automated safety system.  Key word: automated.  AI is automation.  On one end, it's IBM Watson beating our behinds on Jeopardy.  On the other, it's the traction control and anti-lock brake systems in your car.

If the anti-lock brakes have a sensor error and decide the wheel is locked up, they release the brake, which in the control loop is then reapplied, re-released, reapplied, etc., and you get the tat-tat-tat-tat of the anti-locks firing.
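That loop is simple enough to sketch. A toy version in Python, with hypothetical sensor and actuator callbacks standing in for any real automotive API:

```python
import time

SLIP_THRESHOLD = 0.2  # treat slip above this as a locked wheel

def abs_loop(read_wheel_speed, read_vehicle_speed, set_brake):
    """Toy anti-lock loop: release on detected lockup, reapply otherwise.

    The three callbacks are hypothetical stand-ins for real sensors
    and actuators; nothing here is production automotive code.
    """
    while True:
        slip = 1.0 - read_wheel_speed() / max(read_vehicle_speed(), 0.1)
        if slip > SLIP_THRESHOLD:
            set_brake(0.0)  # wheel (or a bad sensor) says "locked": release
        else:
            set_brake(1.0)  # grip regained: reapply
        time.sleep(0.01)    # the tat-tat-tat is this loop cycling at ~100 Hz
```

Note the failure mode: if the wheel-speed sensor lies, the loop obediently releases the brake anyway. The controller has no independent way to know its input is wrong.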

MCAS is much like that, except MCAS takes over and points the nose down to gain speed, overriding the pilots.  The companies in question (allegedly, and in my opinion from here on in) apparently decided that redundancy in electronic sensors would be an upgrade package costing more money on top of the basic MCAS safety package.  The second and third companies, in order (allegedly, and in my opinion) to contain costs on the planes, didn't buy those options (if they were even informed, IMO).

MCAS and anti-lock brakes are very limited automation for specific items (a special-case AI) that overrides people.  Companies then make cost decisions, with all the faults of group decision making in corporate culture: limited accountability, risk aversion, etc.
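The sensor redundancy point can be made concrete. MCAS reportedly read a single angle-of-attack vane in the baseline configuration; with three independent readings, a simple median vote masks one bad sensor. A generic sketch, not Boeing's implementation:

```python
def voted_aoa(readings):
    """Median of three angle-of-attack readings (degrees).

    One faulty sensor cannot drag the median to an extreme value,
    so a stuck vane gets outvoted instead of commanding nose-down.
    """
    assert len(readings) == 3, "this toy voter expects exactly three sensors"
    return sorted(readings)[1]

# A vane stuck at 75 degrees alongside two healthy sensors:
print(voted_aoa([75.0, 4.8, 5.1]))  # -> 5.1, the fault is masked
```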

Irvinecommuter said:
Basically not enough testing, and sales/business trumping safety.

Exactly.  Combine that with Silicon Valley's "move fast and break things" mentality.
 
nosuchreality said:
marmott said:
I don't think AI has anything to do with the MCAS on the 737 MAX. The MCAS "helps pilots bring the nose down in the event the jet's angle of attack drifted too high when flying manually, putting the aircraft at risk of stalling".

https://theaircurrent.com/aviation-safety/what-is-the-boeing-737-max-maneuvering-characteristics-augmentation-system-mcas-jt610/

MCAS is an automated safety system.  Key word: automated.  AI is automation.  On one end, it's IBM Watson beating our behinds on Jeopardy.  On the other, it's the traction control and anti-lock brake systems in your car.

If the anti-lock brakes have a sensor error and decide the wheel is locked up, they release the brake, which in the control loop is then reapplied, re-released, reapplied, etc., and you get the tat-tat-tat-tat of the anti-locks firing.

MCAS is much like that, except MCAS takes over and points the nose down to gain speed, overriding the pilots.  The companies in question (allegedly, and in my opinion from here on in) apparently decided that redundancy in electronic sensors would be an upgrade package costing more money on top of the basic MCAS safety package.  The second and third companies, in order (allegedly, and in my opinion) to contain costs on the planes, didn't buy those options (if they were even informed, IMO).

MCAS and anti-lock brakes are very limited automation for specific items (a special-case AI) that overrides people.  Companies then make cost decisions, with all the faults of group decision making in corporate culture: limited accountability, risk aversion, etc.

Our posts crossed, but I feel like the issue is not really a software issue; it's that Boeing simply didn't properly test it before putting it on the market.  Had they done the proper testing and paid attention to pilot feedback, the issue would have been fixed and avoided.  Instead, they rushed it (like most companies do) and figured they would fix it in a future patch.

Edit:  Seems like the old Ford Pinto fuel tank issue. 
https://www.popularmechanics.com/ca...e-engineering-failures-ford-pinto-fuel-tanks/
 
From this video by Vox it seems like MCAS was installed as a fix for another problem that arose from upgrading the 737's engines:



 
bitmaster20 said:
From this video by Vox it seems like MCAS was installed as a fix for another problem that arose from upgrading the 737's engines:

Basically, the new engines changed the thrust and would cause the plane to pitch up at too high an angle.  It's pure physics...bigger engines, mounted higher and further forward.
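To put the "pure physics" in one line: the bigger nacelles, sitting higher and further forward, generate lift of their own at high angles of attack, and that lift acts ahead of the center of gravity. A first-order sketch of the moment (a simplification for intuition, not Boeing's aero model):

```latex
% Nacelle lift L_n grows with angle of attack \alpha and acts a
% distance x ahead of the center of gravity:
M_{\text{nose-up}} \approx L_n(\alpha) \cdot x
```

So the nose-up tendency grows just when the aircraft is already pitched up, which is the handling change MCAS was added to counter.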

Problem is that Boeing kept selling the upgrade as "no change" from the prior 737; they didn't want to convey an image that pilots needed to be retrained.  It's somewhat similar to what Purdue Pharma was doing with OxyContin.

Edit:  Human intelligence is already pretty good at wrecking the world...so AI is just doing more of the same.
 
nosuchreality said:
Automation, like people, profiles unless it's manually limited.  There have already been repeated instances of smart-tool automation doing just that.  Amazon had a problem with automated job screening: in self-learning mode, the tool looked at existing traits and, voilà, a demographic profile was created; the AI basically decided not to bother looking at women for software developer jobs.

irvinehomeowner said:
Not sure if this is really an issue but there are concerns in AI about diversity bias:
https://www.engadget.com/2019/04/17/artificial-intelligence-diversity-disaster/

The lack of diversity within artificial intelligence is pushing the field to a dangerous "tipping point," according to new research from the AI Now Institute. It says that due to an overwhelming proportion of white males in the field, the technology is at risk of perpetuating historical biases and power imbalances.

The consequences of this issue are well documented, from hate speech-spewing chatbots to racial bias in facial recognition. The report says that these failings -- attributed to a lack of diversity within the AI sector -- have created a "moment of reckoning." Report author Kate Crawford said that the industry needs to acknowledge the gravity of the situation, and that the use of AI systems for classification, detection and prediction of race and gender "is in urgent need of re-evaluation."

Indeed, the report found that more than 80 percent of AI professors are men -- a figure that reflects a wider problem across the computer science landscape. In 2015 women comprised only 24 percent of the computer and information sciences workforce. Meanwhile, only 2.5 percent of Google's employees are black, with Facebook and Microsoft each reporting an only marginally higher four percent. Data on trans employees and other gender minorities is almost non-existent.

So this may be "snowflaking" but given the state of US politics today, this is something to worry about.

damn misogynistic, white supremacist robots...
 
nosuchreality said:
MCAS is an automated safety system.  Key word: automated.  AI is automation.  On one end, it's IBM Watson beating our behinds on Jeopardy.  On the other, it's the traction control and anti-lock brake systems in your car.

AI is absolutely not automation, and the MCAS system has nothing to do with AI. The blog post below is good for understanding the difference.
https://medium.com/@daveevansap/so-whats-the-real-difference-between-ai-and-automation-3c8bbf6b8f4b
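The distinction boils down to who writes the rule. A minimal contrast in Python, with made-up data and rules (illustrative, not any product's code):

```python
# Automation: a human wrote the rule; behavior is fixed and predictable.
def automated_approval(income, debt):
    return income > 3 * debt

# AI: the rule is learned from examples; behavior depends on the data.
from sklearn.tree import DecisionTreeClassifier

X = [[90, 10], [80, 40], [30, 5], [20, 15]]  # [income, debt], invented
y = [1, 0, 1, 0]                             # past decisions, invented
learned_rule = DecisionTreeClassifier().fit(X, y)

print(automated_approval(50, 12))        # True: a rule a human can read
print(learned_rule.predict([[50, 12]]))  # whatever rule the tree induced
```

MCAS falls on the first side of that line: a fixed, human-authored control law, not a learned one.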
 
marmott said:
nosuchreality said:
MCAS is an automated safety system.  Key word: automated.  AI is automation.  On one end, it's IBM Watson beating our behinds on Jeopardy.  On the other, it's the traction control and anti-lock brake systems in your car.

AI is absolutely not automation, and the MCAS system has nothing to do with AI. The blog post below is good for understanding the difference.
https://medium.com/@daveevansap/so-whats-the-real-difference-between-ai-and-automation-3c8bbf6b8f4b

For sure...I think people are talking about the two in tandem; the key result is that human jobs are being reduced or replaced.
 
It also creates an even lower class of jobs, where the AI algorithms need to be fed reliable data and only a human can pre-check that data.

Amazon does that through Amazon Mechanical Turk.

To a certain extent I would also consider content moderation at the big social networks to be the same: content can be flagged by AI, but only a human can make the final decision. And I would argue that it's better to work retail than these jobs: https://www.theverge.com/2019/2/25/...interviews-trauma-working-conditions-arizona.
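A minimal sketch of that pipeline, with invented thresholds and names; the model triages, and anything it's unsure about lands in a human queue:

```python
# Toy human-in-the-loop moderation triage (illustrative only).
from queue import Queue

REVIEW_THRESHOLD = 0.5   # model unsure: a person decides
REMOVE_THRESHOLD = 0.95  # model confident enough to act alone

human_review_queue = Queue()

def triage(post, model_score):
    """model_score is a hypothetical classifier output in [0, 1]."""
    if model_score >= REMOVE_THRESHOLD:
        return "auto-removed"
    if model_score >= REVIEW_THRESHOLD:
        human_review_queue.put(post)  # the human makes the final call
        return "queued for human review"
    return "left up"

print(triage("example post", 0.7))  # -> "queued for human review"
```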
 