Last week, I had the opportunity to attend the Collision Conference 2023 in Toronto. For those who might not be familiar, Collision is an annual technology conference that encompasses various fields including IT, medical, sports, automotive, and others. However, This year, there was a noticeable emphasis on artificial intelligence (AI), which piqued my interest as the CEO of GenAI Services, a consulting and project development company specializing in AI.
One of the sessions that stood out was a Q&A with Geoffrey Hinton, a prominent figure in AI, hosted by Nick Thompson, CEO of The Atlantic magazine. Now, allow me to paint the picture: I walked into the packed hall with palpable excitement in the air. People from diverse backgrounds and interests squeezed into every nook and cranny, all eager to hear from the man who had played an integral role in AI development. The moment Geoffrey Hinton walked onto the stage, the room grew silent.
As he conversed with Nick Thompson, Hinton delved into generative AI’s fascinating yet equally alarming potential. His words resonated with me: “We have to take seriously the possibility that [AI models] get to be smarter than us…and they have goals of their own.” I found myself nodding in agreement, almost in unison with the crowd.
When Hinton listed the six potential risks he associated with AI’s rapid development – bias, unemployment, online echo chambers, fake news, battle robots, and existential risks to humanity – it was like an electric current passed through the room. It made me realize how crucial ethical AI development is, and how central this would be in my work at GenAI Services.
Furthermore, Hinton brought up an intriguing point: he mentioned that AI might get or obtain a goal, which could be noble like stopping wars. However, he warned that the AI could determine that humanity itself is the obstacle in achieving this goal, which may lead to the frightening decision to extinct the human race. This thought was both chilling and eye-opening; it underscored the importance of establishing safeguards and ethical considerations in AI development.
A particularly eye-opening moment for me was when Hinton addressed the capability of AI models to reason. The sheer possibilities made my mind race – I couldn’t help but think about my work at GenAI Services. With AI models moving towards multimodal learning, incorporating text, visuals, and more, the potential is boundless. But Hinton’s cautions served as a reminder to tread carefully. It was a wake-up call and an inspiration all at once.
Moving past the excitement of that session, the conference in its entirety was a valuable experience for networking and learning. It offered a plethora of workshops, panel discussions, and opportunities to connect with professionals from various industries.
In summary, Collision Conference 2023 provided a well-rounded view of the current state of AI and other technologies. The experience reinforced the significance of collaboration and mindful advancement in AI. I returned with insights that will not only inform our approach at GenAI Services but also contribute to the broader conversation about the responsible and effective use of AI.