The concept of immediacy is ingrained in 21st century life. From shopping on Amazon with next-day delivery to internet and location services that provide real-time information in the palm of our hands, it’s clear that instant results will only become more important in everyday life.
Last November, ChatGPT burst onto the scene with a lot of excitement. In an era where technology is jaded by many – and it’s becoming increasingly difficult to surprise and excite people about what it can do – ChatGPT has been a refreshing development. It’s engaging, can be a lot of fun to test drive, and has proven useful for students and professionals looking to generate content.
And it’s all arranged in an instant.
ChatGPT at first glance: what it can and cannot do
The introduction of ChatGPT has caused quite a stir about the state and potential of AI (amidst a long and sometimes troubled history). It has also caused hysteria among AI vendors. Who will be the winners and the losers? And what ethical considerations should be taken into account?
ChatGPT can generate simple explanations of complex topics, sketches, concepts, poems, raps, and even computer code (and point out problems in existing code). Clearly, this AI-powered tool has the potential to revolutionize the way professionals and students improve their productivity and efficiency and is likely to become an essential tool in anyone’s workflow.
It is important to remember that ChatGPT is an impressive research project that has yet to be produced for use in a consumer environment. The language model has been trained on a huge amount of historical, public data from the internet up to 2021, so there are limitations to consider when using it in 2023 and beyond.
The chatbot does not “understand” current events and does not surf the Internet to receive up-to-date information. It is also unable to integrate with data sources (to provide personalized answers), recognize the difference between public and private information, and guarantee factual answers.
ChatGPT and conversational AI: practical use
ChatGPT is already being used in practical enterprise use cases as a complement to conversational AI (CAI). Due to its open nature, it will always generate its own responses that cannot be controlled or modified.
This means that (for now) the technology is most successful when integrated with existing CAI platforms, which provide full control over the responses an intelligent virtual assistant (IVA) can give, in compliance with policies and regulations. It can also reduce the amount of training required to teach a chatbot about a topic, quickly summarize large amounts of information, and shorten chatbots’ time to market, enabling them to process and respond to a more diverse range of interactions and better answer customer questions.
As it stands, ChatGPT remains open and unpredictable in what exactly it will say. This means that there must be checks before companies take this technology into production themselves.
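The kind of check described above can be sketched in a few lines. This is a simplified, hypothetical illustration of a guardrail layer a CAI platform might place around an LLM, not any vendor's actual API: the model drafts a reply, and the platform delivers it only if it passes policy checks, otherwise substituting a curated fallback. All names, terms, and the fallback text are illustrative assumptions.

```python
# Hypothetical guardrail layer around an LLM draft (illustrative only).
# A real CAI platform would use far richer policy checks than a term list.

BLOCKED_TERMS = {"guarantee", "legal advice", "medical diagnosis"}

CURATED_FALLBACK = (
    "I can't answer that directly, but let me connect you with an agent."
)


def passes_policy(reply: str) -> bool:
    """Reject drafts that touch topics the business cannot automate."""
    lowered = reply.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)


def respond(draft_from_llm: str) -> str:
    """Return the LLM draft only when it complies; otherwise fall back."""
    if passes_policy(draft_from_llm):
        return draft_from_llm
    return CURATED_FALLBACK


print(respond("Your order ships tomorrow."))
print(respond("We guarantee this cures your illness."))
```

The design point is that the unpredictable component never talks to the customer directly; everything passes through a deterministic layer the enterprise controls.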
AI ethics: a slippery slope
AI ethics are fluid and rapidly evolving. With the advancement, innovation and collaboration between LLMs and technologies such as CAI, there must be regulation and human oversight to ensure that AI systems are safe and accountable.
This is particularly important in a business environment, where there must be certainty about the range of possible responses from an AI system and rules that prevent AI from engaging in potentially harmful behavior.
Examples include the Microsoft AI chatbot Tay, which was shut down after it started posting offensive tweets, and Facebook bots that created their own language. However, it is important to note that AI systems can only respond based on the information they have been trained on and are unable to understand the meaning behind their responses.
To break that down, think of it this way: if you ask ChatGPT to write you a poem, it will give you lines of words that probably rhyme. The machine may “understand” what poems look like in terms of proper structure, rhyme schemes, and stanza lengths, but a poem is a creative expression of a human being.
ChatGPT can compose words into a structure similar to a poem, and it may even be able to “describe” a poem’s meaning or metaphor, but it cannot feel the meaning. If it cannot experience the feeling that a poem evokes, it is not a poet. Nor can it create a new form of poetry that has never existed before, as it can only generate through replication from the data it has been trained on.
Proactive versus reactive thinking
Technology is clearly moving faster than many people realize. OpenAI will eventually introduce GPT-4, a larger model, which means ChatGPT will get better at creating long and articulate responses that replicate human language in seconds.
Will our current regulations hold up at the same pace as this technology? The answer is no, which means it’s crucial to have open and ongoing discussions about the ethical implications of AI and make sure the legislation doesn’t get left in the dust.
Simply put, we need to be intelligent about how we deploy this kind of technology and start grappling with the most important question at hand: it’s no longer “can we do this?” but “should we do this?”
If we want technology and AI to move in this direction, we need to make sure we think proactively rather than reactively. There are many ethical and regulatory considerations to address, such as data privacy and security, to ensure that the use of AI is in line with the best interests of humanity. We don’t want to be in a situation where hindsight is 20/20.
There is no doubt that ChatGPT is an impressive research project that will give the public a taste of how useful functional AI will be in their lives. The technology is now at the point where you can have robust “human-like” conversations and deliver on the promise of great experiences, which is the beauty of conversational AI.
But seldom does AI evolution involve one technology or breakthrough – it is often a collaboration involving a combination of approaches to different types of problems.
As a research laboratory, OpenAI aims to advance technology. However, that goal is not always aligned with the interests of all of humanity. While we want to continue building technologies and AI that can innovate in this world, we need to be mindful of the cost of doing so.
Nick Orlando is director of product marketing at Kore.ai