The release of OpenAI's ChatGPT model in 2022 captured worldwide attention and suggested to many that generative AI may power the next major expansion of the world economy. Even before ChatGPT, AI and machine learning were already in routine use: for example, Amazon suggests products for customers to buy based on their shopping history, and Google Translate converts text from one language to another.
However, despite all this progress in AI, many theoretical developments are still required. For example, large language models can use billions of parameters, require huge amounts of energy to train, and sometimes produce hallucinations (confident but incorrect answers) in response to questions. Perhaps simpler AI and machine learning techniques, trained on smaller curated data sets, may be better suited to some problems. There are also important research directions and ethical issues, such as explainability in AI and checking for biases in training data sets. Can the potential computational speed-up offered by quantum computers be harnessed for AI?
This cross-cutting theme in theoretical foundations for data science and artificial intelligence aims to explore these issues and questions.