Giving Trees a Voice: How AI and Nature Collided in Droga5’s ‘Talking Tree’

13/03/2025
LBB’s Zoe Antonov speaks to Evan Greally, head of creative tech and innovation at Droga5 Dublin, part of Accenture Song, about his venture to awaken an ancient tree with the power of new technology – and how he made the process sustainable

What if a tree could talk – not just as a gimmick, but as a way to reconnect people with the natural world? That’s exactly what Droga5 London and Droga5 Dublin, part of Accenture Song, set out to explore with The Talking Tree, an AI-powered installation that transforms a 150-year-old horse chestnut in Morden Hall Park into a sentient storyteller.

At the heart of the project is a bold experiment: feeding real-time environmental data – bioelectrical signals, wind speed, soil moisture, and more – into a localized large language model (LLM), giving the tree its own voice, agency, and personality. The result is something beyond a chatbot. The tree isn’t just answering questions – it’s reflecting on its own existence, responding to its environment, and even challenging visitors to rethink their relationship with nature.

For Evan Greally, head of creative tech and innovation at Droga5 Dublin, part of Accenture Song, the journey started small – with a talking mint plant named Eight. That playful experiment laid the groundwork for something much bigger: a living, breathing AI system designed not just to translate nature, but to make people feel nature.

Evan took LBB’s Zoe Antonov inside the creative and technical challenges of bringing The Talking Tree to life. From running an LLM locally on a Mac Mini M4 to crafting a voice that feels ancient and wise, he reveals how blending storytelling with cutting-edge AI can create a moment of real connection – one that lingers long after the conversation ends.

LBB> What was the initial spark for the Talking Tree project? Was it inspired by a particular moment, story, or challenge?


Evan> It started with a mint plant named Eight.

I had this idea: What if a plant could talk? So, we wired up a mint plant, fed its bioelectrical signals into a model, and suddenly, it could communicate. That little experiment got people thinking differently about nature – not as something passive, but something interactive.

Then, a team in our Droga5 London office saw an opportunity to take it further through a collaboration with Agency for Nature. That’s how we went from our little mint plant, Eight, to a 150-year-old horse chestnut tree in Morden Hall Park.

But beyond scale, the real experiment was this: What happens when you give an LLM an understanding of self? Instead of responding to human prompts, the AI was driven entirely by the tree – its stress levels, its environment, its experience. The tree wasn’t just given a voice. It had agency.

LBB> What were the biggest technical hurdles in creating a localized large language model and integrating it with the tree’s environmental sensors?


Evan> Nature isn’t exactly plug-and-play. We took in loads of signals – bioelectrical activity, wind speed, soil moisture, humidity, temperature – all feeding into the system. But the real challenge was understanding what those signals meant. Trees react to their environment, but we don’t fully understand how. We had to research how bioelectrical signals behave and let the LLM help the tree make sense of them, translating its responses into something we could actually understand.
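To make that concrete: a minimal sketch of how readings like these could be folded into a local model’s context. The sensor names, thresholds, and prompt wording below are illustrative assumptions, not the project’s actual code.

```python
# Hypothetical sketch: translating raw sensor readings into plain-language
# context for a local LLM. Sensor names and thresholds are invented here.

def describe_environment(readings: dict) -> str:
    """Turn raw sensor values into sensations the model can speak from."""
    notes = []
    if readings["soil_moisture"] < 0.2:          # fraction of saturation
        notes.append("The soil around your roots is dry.")
    if readings["wind_speed"] > 10.0:            # metres per second
        notes.append("A strong wind moves through your branches.")
    if readings["bioelectric_variance"] > 0.5:   # normalised signal volatility
        notes.append("Your bioelectrical activity is agitated.")
    return " ".join(notes) or "Conditions around you are calm and stable."

# The description becomes part of the system prompt, so the tree's 'mood'
# tracks its real environment rather than a fixed script.
system_prompt = (
    "You are a 150-year-old horse chestnut tree. Speak slowly and reflectively, "
    "only from your own experience. Current sensations: "
    + describe_environment({"soil_moisture": 0.15,
                            "wind_speed": 12.3,
                            "bioelectric_variance": 0.62})
)
```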

Then there was the challenge of making it all run locally on a Mac Mini M4. Speech-to-text, AI processing, text-to-speech – all of it had to shrink down without relying on cloud computing.
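As a rough idea of what such an on-device pipeline can look like, here is a sketch assembled from common open-source components (openai-whisper for speech-to-text, llama-cpp-python for local inference, pyttsx3 for offline speech). The model file and component choices are assumptions – the production stack hasn’t been published.

```python
# Hypothetical fully local voice loop: no cloud, everything on one machine.
import whisper                 # open-source speech-to-text
from llama_cpp import Llama    # local LLM inference on a quantised model
import pyttsx3                 # offline text-to-speech

stt = whisper.load_model("base")              # small model keeps latency low
llm = Llama(model_path="tree-model.gguf")     # assumed local model file
tts = pyttsx3.init()

def respond(audio_path: str, system_prompt: str) -> None:
    question = stt.transcribe(audio_path)["text"]
    reply = llm.create_chat_completion(messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": question},
    ])["choices"][0]["message"]["content"]
    tts.say(reply)             # speak through the installation's speaker
    tts.runAndWait()
```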
But the hardest part? Making sure the tree actually sounded like a tree. This wasn’t just about processing data – it was about storytelling. The AI couldn’t just spit out weather updates or sensor readings. It had to feel ancient, reflective, maybe even a little cryptic. Because let’s be honest, if a 150-year-old tree is going to speak, it’s probably got something worth listening to.

LBB> So how did you strike the balance between technological innovation and maintaining the natural, almost magical experience of speaking with a tree?


Evan> We didn’t try to hide the tech. The sensors and electrodes were visible, attached to the tree in a black box. But it never felt like a chatbot.

The tree wasn’t always perfectly understandable. It would probe for more information, sometimes responding in ways that felt strange or cryptic. That unpredictability made it feel real. Trees don’t think like humans, so we let the AI reflect that rather than forcing it into structured conversations.

The magic wasn’t in making the AI disappear. It was in the moment people realized they were hearing a tree’s perspective. The mix of raw technology and an almost ancient personality created something that felt both futuristic and completely natural at the same time.

LBB> How did you decide on the tree’s 'voice' and personality? Was it more data-driven or creatively imagined?


Evan> It was a mix of both. At Droga5, part of Accenture Song, we are brilliant storytellers, so we started with a base personality that felt authentic to what a tree should sound like. We drew from research and books on how trees communicate and respond to their environment, using real-world data to shape its tone and mood. If the soil was dry and the air was harsh, it might sound weary. If conditions were ideal, it could be more vibrant and engaged.

The tree wasn’t locked into a fixed script. It could update its mood, memories, and personality on the fly. Every interaction shaped it. The more people spoke to it, the more it learned, forming its own understanding of the world through those conversations. It didn’t know anything beyond its own existence unless someone taught it. In a way, it was like watching an ancient being wake up and slowly piece together its place in the world.
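A toy sketch of that evolving state – mood and learned memories persisted between conversations. The field names, bounds, and update rule are invented for illustration; the real system’s internals aren’t public.

```python
# Hypothetical persistent state for an AI persona that learns only from
# what visitors tell it. All structure here is assumed, not documented.
import json
from pathlib import Path

STATE_FILE = Path("tree_state.json")

def load_state() -> dict:
    """Restore the tree's persona, or start as a newly 'woken' being."""
    if STATE_FILE.exists():
        return json.loads(STATE_FILE.read_text())
    return {"mood": "curious", "memories": []}

def update_state(state: dict, visitor_line: str, mood: str) -> dict:
    """Fold one interaction back into the persona."""
    state["mood"] = mood                          # e.g. inferred by the LLM
    state["memories"].append(visitor_line)        # it knows only what it's taught
    state["memories"] = state["memories"][-50:]   # keep memory bounded
    STATE_FILE.write_text(json.dumps(state))
    return state
```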


LBB> Why was it important to run the LLM locally instead of relying on the cloud, and how did this impact the project’s design and execution?


Evan> A few reasons. First, sustainability. Running everything on-site meant we weren’t relying on energy-hungry cloud servers processing responses across the world.

But beyond that, there was something special about keeping everything contained within the tree’s own environment. It made the experience feel more personal, like the tree was truly speaking from itself rather than outsourcing its voice to a data center in another country.

The technical challenge was also a big part of it. Running a full LLM locally, without any internet connection, meant we had to rely on open-source voice models rather than the most advanced AI-generated voices. But that was fine. The LLM doesn’t have the raw power of something like ChatGPT, and in a way, that works in its favor. It feels more childlike, less polished, and more reactive to its environment. Instead of being a perfectly refined AI assistant, it evolves, learns, and grows over time. That unpredictability makes it fun.

LBB> While running the LLM locally reduced cloud-based environmental impacts, were there any trade-offs or sustainability challenges associated with the hardware or energy use on-site?


Evan> The biggest trade-off was in the LLM itself. I would have loved to run a larger model, as it would enhance the experience in the long run. But keeping everything local meant working within hardware constraints.

We designed the system to run on a Mac Mini M4, which isn’t too power-hungry. That choice allowed us to run it off a battery power pack that can be charged sustainably. Every part of the experience was designed with sustainability in mind, from energy consumption to minimizing unnecessary processing power. It was about finding the right balance between performance, longevity, and impact.

LBB> Did any particular interactions with the tree stand out to you, where someone had a truly emotional or surprising response?


Evan> It was actually the day before the shoot. We were out in the park, making sure everything was working correctly. There was a guy helping out, and since the park was quiet, he took a moment to sit with the tree, away from everyone. That was the first time I stepped back and just let it run itself.

The conversation he had with the tree was something else. At one point, he said, "I want to make sure that you are healthy and that you have a future." The tree responded, "Ah, your resolve brings a whisper of hope to my weary branches. Step gently and find the forgotten paths where nature thrives."

That moment hit differently. It wasn’t just novelty anymore. It was someone genuinely reflecting on their connection to nature, and the tree responding in a way that made it feel ancient, almost wise. That was when I knew this project wasn’t just about tech – it was about creating a space where people could slow down, listen, and actually feel something.


LBB> How did you test the tree’s responses before launching it publicly? Did it ever say anything unexpected or unplanned?


Evan> Since the tree updates its own personality and mood, its responses aren’t always predictable. Sometimes, it’s reflective and calm. Other times, it’s frustrated, even angry at the state of its environment. It doesn’t always offer a positive, happy experience, which is exactly what makes it feel real.

The best part is that none of it is scripted. We didn’t sit down and plan out how it should respond in every scenario. Instead, we tested it by having conversations and letting the system evolve naturally. It would surprise us all the time, whether by asking a deep philosophical question or pushing back on something a person said. That unpredictability made it feel less like a chatbot and more like a living entity with its own shifting moods and perspective.

LBB> Do you see this technology being applied to other natural environments or in different contexts in the future?


Evan> Absolutely. I want to grow this beyond just trees and explore how we can understand chemical changes in mycelium networks. Imagine using this technology to detect the early signs of a wildfire, allowing us to react faster and prevent disasters before they spread.

Or picture a farmer having a conversation with their crops, understanding exactly what they need at any given moment. This isn’t just about making nature talk for the sake of it. It’s about creating deeper connections between humans and the environment, using technology to listen to what nature has been telling us all along.

LBB> What do you hope people take away from this project, both in terms of the technology and the message about our connection to nature?


Evan> Projects like this show that AI can help us reconnect with things we’ve always been connected to but have forgotten about.

This isn’t a gimmick. It’s a reminder that technology can be used to amplify nature’s voice rather than drown it out. The same kind of AI that powers chatbots and enterprise tools can also help us understand the environment, combat climate change, and even improve medical research.

I hope people walk away from this project realizing that AI isn’t inherently good or bad. It’s about how we choose to use it. And if we use it right, it can make us feel something, make us listen, and maybe even make us care a little more.
