Michael Grabowski, Ph.D., led the discussion on AI and hosted three other panelists. MANHATTAN.EDU/COURTESY
By Mary Haley, Asst. Social Media Manager
Students, faculty and alumni were welcomed to a panel discussing the dangers and benefits of artificial intelligence (AI) in education and at universities.
Michael Grabowski, Ph.D., chairperson of the communications department, led the event and hosted three panelists who came to talk about how AI has affected their fields and how it will impact the future workforce.
Grabowski began by entering a prompt into ChatGPT, asking it to define AI. ChatGPT is a chatbot developed by OpenAI that uses natural language processing to generate conversational responses. ChatGPT answered by explaining that AI is “a branch of computer science that aims to create machines or software capable of mimicking or simulating human intelligence.”
Grabowski explained that AI is rapidly affecting various fields of work and education through its many developments. ChatGPT has had a strong impact on education because it is easily accessible to students, who can use it to generate responses to homework and other assignments. Other AI technologies, such as image generation and facial recognition, have affected the film industry and the media by producing films and photographs that can easily be mistaken for the real thing. Large companies are now also using AI in hiring.
Two Manhattan College alums and another panelist were invited to talk about their experiences with AI in their careers and their perspectives on the implementation of AI in the workplace.
Eileen Murray ‘80, Hon. D.Sc. ‘15, former co-CEO of Bridgewater Associates and Chair of the Financial Industry Regulatory Authority, spoke on how AI has recently affected her field.
Murray explained that in areas of finance where people would expect AI to be beneficial, such as collecting data and some accounting work, AI has not been used. In her experience, AI has been applied to higher-level work in finance, such as credit decisions, managing financial risk and even quantitative trading. Murray said the technology is not new to the industry.
Even though AI is used in larger financial decisions, Murray clarified that all work that is done by AI is checked by professionals before any final decisions or proposals are made.
“All of the AI applications I’ve seen are coupled with people who are checking, ‘Is this information erroneous? Does it make sense?’,” Murray said. “So things like judgment, expertise and basically experience are really important when it comes to checking what [AI] is producing.”
Robert Otani, senior principal and chief technology officer of Thornton Tomasetti, a structural engineering firm, spoke on how AI is used in his profession and how human expertise works alongside the designs generated by AI in engineering.
Thornton Tomasetti released an AI-based design tool for engineering and planning building structures known as Asterisk. Otani explained that the tool could design the plan for a building in under a minute, versus the week it would take a team of engineers to devise the same plan. The building plan from Asterisk was “about 85 to 90 percent correct,” Otani said.
“We’ve been improving those models over the years, and we’re kind of at a point now where we’re confident that [AI] will be an everyday use,” Otani said.
With the rapid development of AI in the workplace, Otani explained that people are responsible for reviewing plans made by AI, as the “stakes are so much higher” in engineering, where a building’s safety is on the line.
He also noted which information needs to be checked when using AI and that not all AI tools are suited to the same tasks.
“If you’re asking ChatGPT about a recipe for blueberry pie, it’s not a big deal,” Otani said. “But if you’re [asking about] people’s lives, the stakes are much higher. We had to create a road map such that people use it in the right way.”
As in Murray’s field of finance, AI has been established for years in law. Noreen Krall ‘87, J.D., retired chief litigation counsel and vice president of Apple Inc., explained that over the past 10 to 15 years AI has been used primarily in litigation to review documents involved in lawsuits.
“[AI] is really a way of reworking the profession in a positive way where the efficiency went up and the quality of work went up,” Krall said.
In her own experience, Krall has recently experimented with ChatGPT to test its potential to generate lawsuits and to gauge what the future holds for chatbots and other forms of AI in law.
Like the other panelists, Krall explained that work generated by AI lacks a personal check, and it is imperative that a professional look over what is generated before any AI-drafted lawsuit moves forward in court.
“I think [AI] is going to be very efficient in the legal profession, but I don’t think it is going to take away work,” Krall said. “I think it’s going to change the way work is done. I think it’s going to be more about the [quality control] that goes around it…the personal component of what you’re doing. But I can certainly see the efficiencies being incredibly powerful.”