AI Evolution and the Ethical Dilemmas Surrounding It: Will We Overcome These Growth Blockers?

Artificial intelligence is evolving fast, to the point where anyone with the skills now has access to the tools and platforms needed to build it. But is it time to stop and think before we plunge headlong into cognitive chaos?

With great power comes great responsibility, and developers and executives are being cautioned not to build, or rely on, the black boxes that have characterized AI up to this point. Recently, Bank of America and Harvard University teamed up to convene the Council on the Responsible Use of AI, which will bring together, educate, and inform business, government, and societal leaders on the latest technological developments in AI and machine learning; discuss emerging legal, moral, and policy implications; and investigate ways of developing responsible AI platforms.

IBM recently proposed transparency documentation for AI research and development, insisting that artificial intelligence systems should come with a transparent document outlining their lineage, specifications, and directions for use.

In layman's terms, they propose establishing industry standards of the kind many other sectors already adhere to.
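As an illustration only, such a transparency document could be modeled as a simple structured record. The field names and contents below are hypothetical sketches based on the lineage/specifications/directions idea described above, not IBM's actual published specification:

```python
from dataclasses import dataclass, asdict


@dataclass
class TransparencyDoc:
    """Hypothetical AI transparency record; fields are illustrative,
    not taken from any published standard."""
    model_name: str
    lineage: list          # e.g. datasets and parent models the system was built from
    specifications: dict   # e.g. intended task, inputs/outputs, known limitations
    directions: str        # guidance on responsible deployment and use


# A sketch of what a filled-in document might look like
doc = TransparencyDoc(
    model_name="example-classifier",
    lineage=["public-dataset-v1"],
    specifications={"task": "text classification", "limitations": "English only"},
    directions="Review outputs before use in high-stakes decisions.",
)
print(asdict(doc)["model_name"])  # → example-classifier
```

The point of a standard like this would be that every deployed model ships with such a record, so end users can inspect where the system came from and what it was designed to do.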

Will these practices build trust among consumers (end users)? Can we overcome the growth barriers simply by making the technology transparent?
