
Our Big AI Experiment: The Good, the Bad and the Future?


In this special edition of 52INSIGHTS, THINKHOUSE discusses its experience working with machines such as Midjourney, ChatGPT and Stable Diffusion.


AI was big on the news radar again this week as a pioneering Google AI researcher, dubbed ‘The Godfather of AI’, came out with warnings of dangers ahead. His main message? There are serious unknowns, and it’s difficult to prevent the technology being used for bad things. In an effort to dig deeper, this week we hosted a Fit To Win (our internal learning programme) session - a team screening of The AI Dilemma (a highly recommended watch!). Meanwhile, we’re also seeing the adoption and use of AI accelerating across global audiences. It’s impacting how the business world needs to plan for the future - from radical ESG transparency to creativity.

In our most recent AI exploration, THINKHOUSE took on AI.

This exploration was part of our F*ck It Friday - a day dedicated to testing, innovating and experimenting with new creative and tech solutions. We ‘took on AI’ to explore how the newest AI tools could be incorporated into our work. To do it, we decided to create a recruitment social media campaign for THINKHOUSE. The project wasn’t undertaken with a binary view of winning or losing; we used it to explore potential avenues for implementing AI, and to identify areas where it is best to refrain.

In this special edition of 52INSIGHTS we’ll discuss our experience working with machines such as Midjourney, ChatGPT, and Stable Diffusion - highlighting the benefits and limitations we found.


Midjourney

Midjourney is an AI tool that generates photo-realistic images from written prompts and is hosted on the social platform Discord. It particularly excels at creating detailed landscapes, and is used frequently online to generate dramatic sci-fi or historical scenes. While the underlying technology is widely believed to be based on Stable Diffusion, Midjourney has become a must-know programme for those using AI to generate work, thanks to its easy interface and impressive results. The learning curve to start creating is minimal, but perfecting the output will take some time.

How did we use it?

We used Midjourney to generate imagery that felt in line with THINKHOUSE’s tone of voice, which we could then feed into Stable Diffusion. We trained one machine on the other, churning out images akin to past THINKHOUSE photoshoots and styles, at times coming extremely close to finished pieces that would be acceptable for publishing without edits. We had trained the machine on these images, so it came as no surprise that it was able to replicate them.

The Benefits:

- Time-saving: Midjourney can produce images in a matter of seconds; it was the most time-effective tool we used all day in terms of generating finished products.

- Customization: The machine can be programmed to create images that match specific criteria, such as a particular style, colour scheme, or subject matter.

- Inspiration for Creativity: Midjourney can be a source of inspiration for designers, artists, and other creative professionals, providing them with new ideas and perspectives. 

The Limitations:

- Quality: The images didn’t always meet the level of quality of images created by human photographers, designers or artists, especially for complex or nuanced images - and especially when featuring faces. (Always count the teeth!)

- Lack of originality: It lacked the unique characteristics and originality that come from human creativity and intuition. Though we based this campaign on THINKHOUSE imagery, the products were so similar to previous work that they became derivative of it, rather than taking inspiration as a person would.

- Bias: AI image generation algorithms may be biased based on the data sets they were trained on, leading to underrepresentation or misrepresentation of certain groups of people or objects in images.

- Legal issues: The use of these AI-generated images raises legal questions around copyright ownership, licensing, and intellectual property rights.


ChatGPT

ChatGPT is a large language model chatbot that performs the task of predicting the next word in a series of words. It was developed by OpenAI (who also created DALL-E) and has quickly gained popularity, thanks to its ability to answer complex questions conversationally and to learn from each interaction.
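The “predict the next word” idea can be illustrated with a toy bigram model - a minimal Python sketch, nothing like the transformer architecture a real large language model uses, but it shows the core task of choosing a likely next word from what came before (the corpus and function names here are our own invention):

```python
from collections import Counter, defaultdict

def train_bigrams(corpus: str) -> dict:
    """Count, for each word, how often each other word follows it."""
    words = corpus.lower().split()
    counts = defaultdict(Counter)
    for current, following in zip(words, words[1:]):
        counts[current][following] += 1
    return counts

def predict_next(counts: dict, word: str) -> str:
    """Return the most frequent follower of `word` seen in training."""
    followers = counts.get(word.lower())
    if not followers:
        return "<unknown>"
    return followers.most_common(1)[0][0]

corpus = (
    "the brief asked for a video. the brief asked for copy. "
    "the team wrote the copy."
)
model = train_bigrams(corpus)
print(predict_next(model, "brief"))   # → "asked"
```

A real chatbot does the same thing at vastly greater scale, with probabilities learned over billions of documents rather than simple counts over one sentence.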

How did we use it?

We used ChatGPT to develop strategy, copy and scripting for THINKHOUSE content. It was the first machine involved, and the last. We began a dialogue with the application, asking it to build us a strategy for a THINKHOUSE social media brief - after some back and forth, ChatGPT’s recommendation was to create a video containing very specific imagery. Once we had worked with Stable Diffusion and Midjourney, we returned to ChatGPT for video scripting (as it had included a voiceover element in its recommendations) and for the post copy.

The Benefits: 

Knowledge: ChatGPT lowers the experience level needed to develop social strategy. Used as a starting point, it could help young marketeers or students create the bones of a plan. Since it learns from research available online, this knowledge becomes more accessible - especially useful for those without access to entire teams of varying experience.

Copy: It’s not without its faults, but in general, the copy produced by the machine isn’t terrible. We found that with a broad brief, ChatGPT could create simple, standard copy that in an agency setting could provide great thought starters for more developed, professional copy. This suggests it could also be used as a brainstorm thought starter.

Time-Efficiency: A common thread in the AI bots we used - ChatGPT provided us with an entire strategy, video concept, copy and script within an hour of work. Traditionally, this would be done by entire teams; ChatGPT is a one-man show.


The Limitations:

Tone of Voice: For brand work, we’re going to see some less-than-stellar tonal shifts. For example, ChatGPT has a poor handle on youth-oriented language; when prompted, it gave outdated language that could be defined as cringey (e.g. “rock on, client service legends”). This doesn’t work for any brand that wants to speak to a young audience.

Quality: Though the bot addressed the brief in record time, the standard of its outputs was poor - which is why we chose to move away from the original concept. There was a lack of innovation in its suggestions, as the machine is only concerned with ticking the boxes it has been prompted, in the most logical way. We also have some concerns over its repeated use of “savage” as an Irish slang term in its copy, which isn’t particularly politically correct. Overall, the product the traditional team produced was a more informed, creative piece of work, and was blatantly of a higher quality.


Stable Diffusion

In the simplest terms, Stable Diffusion is a latent diffusion model: give it a text prompt and it will return an image matching the text. The machine can also produce videography and animations. Stable Diffusion belongs to a class of deep learning models called diffusion models. These are generative models (meaning they are made to generate new data similar to what they have seen in training). In the case of Stable Diffusion, like Midjourney, the data needed is images.
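The diffusion idea itself can be sketched in a few lines of NumPy - this is only the forward half (progressively adding noise to an image), shown under our own simplified assumptions; the trained model’s job is to learn to run this process in reverse:

```python
import numpy as np

rng = np.random.default_rng(0)

def add_noise(x0: np.ndarray, t: int, T: int = 1000) -> np.ndarray:
    """Forward diffusion: blend the clean signal x0 with Gaussian noise.

    alpha_bar shrinks from ~1 at t=0 (mostly signal) towards 0 at t=T
    (mostly noise), here following a common cosine schedule.
    """
    alpha_bar = np.cos(0.5 * np.pi * t / T) ** 2
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * eps

image = np.ones((8, 8))                  # stand-in for a latent "image"
slightly_noisy = add_noise(image, t=10)  # early step: barely disturbed
very_noisy = add_noise(image, t=990)     # late step: almost pure noise
```

Generation works by starting from pure noise and denoising step by step, with the text prompt steering each step - which is why the “latent” images gradually sharpen into something matching the prompt.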

How did we use it?

Similar to how we used Midjourney, Stable Diffusion was used for the visual aspects of the content - specifically to create an animated video, as per ChatGPT’s recommendation. Our creative technology team members built out prompts and negative prompts to create the required imagery, and trained the machine on the images Midjourney had produced for reference.

The Benefits:

Quality: Stable Diffusion produces more realistic images compared to traditional text-to-image models; once prompting and controlling the machine are mastered, the quality of outputs is incredibly high. This is due to its use of a diffusion process that gradually refines the generated image, avoiding the unpredictability that can occur otherwise.

Ability: It gives the user a lot of flexibility in terms of what can be created; there’s a vast number of models available to use online, created by other users. There’s also a vast number of plug-ins that enhance what we want to create. Typically, these results could be out of scope depending on the individual’s skill level, time, or access to software and programmes.

Referencing: Stable Diffusion is a useful tool for quick mock-ups or reference imagery/animation. Rather than briefing creatives to put time into designs that may never go through approval, using a machine like this could result in a more cost-efficient way of creating reference imagery for marketeers.

The Limitations:

Time Efficiency: Unlike more user-friendly machines, Stable Diffusion requires a decent amount of maths and prior experience with the software, so a lot of training hours are needed to even operate the machine. On top of this, the accuracy of the image or animation can be very sensitive to parameter choices, which can be time-consuming to fine-tune. Our AI team, with members skilled in Stable Diffusion, spent the majority of their time attempting to get the machine to understand the given brief.

Content Limitations: We needed specific images to train the system on before it could replicate anything at all. Everything created by Stable Diffusion needs to be informed by other content - either supplied by users or drawn from the Stable Diffusion database. At the time of our project, a lot of questions were being raised about the source of this database, so to avoid copyright cases, marketeers will want to continue to use only content they personally have the rights and access to use.

Tools: Stable Diffusion’s AI image generation is highly dependent on computer hardware, specifically graphics card VRAM. Lower-end machines will still be able to generate imagery, but speed will be slow.


There are multiple ways AI can be used today to experiment from a creative perspective. Getting to understand how to talk to AI via prompts, known as prompt engineering, is a valuable practice to explore. As this technology continues to advance, basic literacy in it will be an important skill to have developed - even if it’s just to understand what it’s all about. 
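In practice, prompt engineering often means assembling a prompt from structured parts rather than typing freeform text each time. As a minimal sketch - the helper, its parameters and the example strings are ours, not part of any tool’s API - a prompt and negative prompt of the kind image generators accept could be built like this:

```python
def build_prompt(subject: str, style: str, extras=(), negatives=()) -> dict:
    """Assemble an image-generation prompt and its negative prompt
    from reusable parts (subject, house style, modifiers, exclusions)."""
    parts = [subject, style, *extras]
    return {
        "prompt": ", ".join(p.strip() for p in parts if p.strip()),
        "negative_prompt": ", ".join(negatives),
    }

request = build_prompt(
    subject="a busy creative studio",
    style="warm documentary photography",
    extras=["natural light", "35mm film grain"],
    negatives=["extra fingers", "distorted faces", "text artefacts"],
)
print(request["prompt"])
# → a busy creative studio, warm documentary photography, natural light, 35mm film grain
```

Keeping the style and negative lists in one place is what lets a team iterate on prompts consistently - the kind of basic literacy with these tools the paragraph above describes.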

Copyright is one of the major potential issues when it comes to brands using AI to create content. For now, training AI image generating tools like DALL-E on owned creative assets, or using your own images as a prompt for Midjourney, is a relatively conscientious and responsible way to utilise the tools.

Younger audiences are using AI now as a creative and research tool. As discussed in this article, this has an impact on things like corporate and brand transparency, placing more importance on getting marketing right. 

Read more about our THINKHOUSE experiment's results here.

THINKHOUSE, Mon, 08 May 2023 09:15:00 GMT