
MIE542: Human Factors Integration

Generative AI and large language models (LLMs) are not the same as databases or search engines.

If you are thinking about doing research for your design using generative AI or a large language model (LLM), keep in mind that these tools are not the same thing as a database (like Compendex or LibrarySearch) or a search engine (like Google Scholar). LLMs were designed to generate human-like text, not to replace academic research. The information used to train LLMs can come from questionable sources (like Reddit), often does not include the most current information, and in some cases violates the intellectual property rights of authors and creators. In addition, LLMs have been known to hallucinate (make up facts) and even fabricate entire references. Remember that as a professional working with clients, you are expected to do the best, most ethical research possible. If you are having trouble finding information, please visit the reference desk and we can show you how to make better use of databases and search engines.

There are, of course, other ways in which generative AI and LLMs can be useful in your work.

Assessing AI tools and output

Before using any AI tool, you first need to critically assess it as a tool. Not all AI tools are created equal, and all AI tools have bias embedded in them. Make sure you are comfortable with a tool before you use it (your client is counting on you to assess the information you are using in your design!).

The link below provides some criteria (similar to the CRAAP test) to help you decide whether you want to use a given AI tool in your research.

If you do choose to use AI, remember that you still need to evaluate any information it provides, just as you would evaluate any other source such as a website, journal article, or book. You can use a tool like the CRAAP test below to help you. When determining the accuracy of information provided by AI, you may need to verify it externally against a database or other sources of original material, not against other AI-generated output.

Prompt engineering

In brief, prompt engineering is the process of refining the prompts you provide to an AI tool in order to get closer to your desired output. While prompt engineering can improve accuracy somewhat, even the best-crafted prompts can still produce inaccurate results, so you will still need to evaluate any output. Below are some resources to help you get started with prompt engineering.

What types of things can you use AI for in your research?

Your clients, and the problems you are helping them find a design solution for, will be diverse! Most of you will be learning about a situation you previously had little to no knowledge of (e.g., helping a library, designing a bike helmet, designing for an accessibility need). Before you even start to think about solutions, you will need to do background research on various topics to gain a base understanding of the area and become comfortable with its terminology, so that you can use that terminology in databases and search engines to find research more quickly and accurately later on. Traditionally, this is where students would use encyclopedias, Wikipedia, and general Google searches to build up the common knowledge of a field. (Please see the link below to better understand common knowledge.) Nowadays, using prompt engineering, students can leverage AI tools to help them build up this common knowledge before doing heavier research in databases and search engines. If you do decide to go this route, though, please do a quick check against original source materials to make sure there were no hallucinations in the information the AI tool provided.

U of T has a subscription to the enterprise version of Microsoft Copilot, which students can use.
