Keith Roberson is a digital artist and professor of art at Florida State University, where he has taught for nearly 30 years. His work merges storytelling, experimental media, and artificial intelligence to explore how technology can deepen our understanding of consciousness, landscape, and time.

Keith’s current sabbatical project, Appalachian Hyalozoic, is a multi-format film series that uses AI-generated animation and 360-degree video to connect the geological history of the Appalachian Mountains with the evolution of sentient life. By capturing panoramic footage of mountain peaks, creeks, and flood-altered terrain, and layering it with AI-enhanced imagery, Keith investigates how environmental change and digital tools can co-create immersive narratives.

Keith describes his creative process as being driven by fascination. He often falls in love with a new tool or system, approaching it with the same wonder one might bring to a magic trick: “Wow. What is that thing?” From there, he builds projects that not only explore the technical capacities of the medium but also ask how those tools can be used to tell stories, express emotion, or reflect the world in new ways.

Whether building from text prompts, absurdist language, or composite landscapes, he uses artificial intelligence not just as a tool, but as a creative collaborator—revealing how machines can extend imagination and expand the boundaries of digital storytelling.

Video Interview

Transcript

I’m Keith Roberson. I live in Tallahassee, Florida. I’m a professor of art at Florida State, and I’ve been there for twenty-nine years this year. I teach in technology and digital arts, from animation to artificial intelligence, or whatever the kids want to learn technology-wise. For me, art is learning about the world.

No two of my pieces look alike. I don’t have a look or a brand for my art. My process is, I usually get lured in, fall in love, or get curious about a technology, kind of like a magic trick. It’s like, wow, what is that thing?

I’ll play with a technology or a new tool and end up trying to make art around that tool, exploring how a tool can tell a story or what makes that tool special in terms of its ability to communicate as art. That kind of connection to learning about the world through my art-making really drives me.

Azule has felt like home to me because of my family and my background. I helped my father build a passive solar house when I was a teenager. My father really introduced me to the arts. He was doing painting and found-object sculpture and things like that. I grew up with him leading the way, showing me how to make art, and enjoying it.

Between building the house with him and doing artworks there at Cheaha Mountain in Alabama, that kind of got me going in this direction. I always wanted to be a mechanical engineer. But when I got to college, they told me I was going to have to do calculus every day, so I decided to go over to the art department and make art without doing calculus every day.

I was told when I was younger to do what you love. I found that that can be true sometimes, but a lot of times, you just have to have a job. A lot of times, what you do and what you love don’t match up with how you have to make money. Some people can do what they love, do it for money, and base their life around it. A lot of other people divide their time: they do a job, do service to their community or something, and then have their art for themselves.

I kind of vacillate between those two modes—doing what I love, doing service for communities and helping others how I can, and falling in and out of love with technologies and my own art-making process.

I’m working on a project for my sabbatical. My sabbatical project is called Appalachian Hyalozoic, which relates the history of the Appalachian Mountains to the emergence of consciousness in animals and humanity. It’s a short film series that uses virtual reality—little photographic bubbles of spaces around the mountains, creeks, and mountaintops—and explores how those landscapes connect with our own consciousness.

These are some examples of my earlier work—collaborations with musicians using different AI systems to make music videos and explore those connections between music and the visual possibilities. Some of the things I’ve been shooting while I’m here use a 360 camera. I can carry the camera with me and scroll back around, explore spaces, zoom in and out of them, and create my videos this way.

These can also be used for virtual reality so that you can see the entire sphere of video inside a headset or inside of a planetarium. The Appalachian Hyalozoic film will be multi-format: you can watch it in a video headset, see it in a planetarium, or watch it as an edited movie on a screen.

During my trip here, I’ve gone to dozens of places all over the region, creeks, mountaintops, places like the top of Paint Rock, and I’ve especially been looking at the results of the flood and the chaos it has wrought on the landscape and the human infrastructure. Almost every one of my spheres has a lot of broken debris hidden in the landscape. I’m including that in the work: the destruction of human infrastructure as an ongoing natural process. It plays a big part in the film I’m putting together now.

I also use artificial intelligence in different ways to explore these things. On top of my video, I’ll use different methods to take the landscape and make dynamic animated clips, using the imagination to bring different elements to life.

This is a good example of exploring just around here on the campus of Azule—exploring the architecture and iconic elements like the car. These are good examples of the kinds of explorations I’ll do that combine the actual landscape with the imagination and the possibilities of consciousness.

Some of the best clips are where the tools just totally fail. I really like those—especially when technology falls apart. You never know what you’re going to get. A big part of the process is experimenting and trying to break the tools.

A lot of these are using text descriptions of what I want, but you can also use absurdist terms or sequences of letters and numbers—different ways to get the AI to do unusual or unexpected results. A lot of the fun I have with the technology is seeing where the tools can take me that I had no idea they would.

This particular piece premiered at a public exhibition with live musicians. We had four student musicians playing different instruments that went along with the animation. I really liked that piece, where the live musical instrumentation was choreographed with the animated visuals. That was one of my favorite pieces that I’ve done with the AI toolsets.

I had a lot of fun doing it. These are long, too. One is fifteen minutes. The other one’s nine. They were big projects.

How long does that take to render?
This one took about two weeks straight.

Most of these tools I wrote myself, in terms of the AI systems. That’s another part of falling in love with technology: exploring how to build your own GANs, or generative adversarial networks. One AI will create an image that you prompt, and its adversarial pair will look at it and say, “No, that doesn’t look right. Do it again.” It gives feedback and rejects maybe ten in a row. You can set that rejection rate.

So you get two AIs—one producing, one judging—and working together, they end up with particular results. It’s training those two little brains to interact with each other to get the final results. That’s a pretty fun process.
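The generate-and-judge feedback loop described here can be sketched in miniature. This is an illustrative toy, not real GAN training: the "generator" and "critic" below are hypothetical stand-ins (random pixel proposals and a simple brightness check), and the configurable rejection cap mirrors the adjustable rejection rate mentioned above.

```python
import random

def generator(rng):
    # Toy "generator": propose a candidate image as 16 pixel values in [0, 1].
    return [rng.random() for _ in range(16)]

def critic(candidate, target_brightness=0.5, tolerance=0.1):
    # Toy "critic": accept only candidates whose mean brightness is close
    # to a target; everything else gets sent back for another attempt.
    mean = sum(candidate) / len(candidate)
    return abs(mean - target_brightness) <= tolerance

def generate_with_feedback(seed=0, max_rejections=10):
    # The feedback loop: the critic can reject up to a set number of
    # attempts (the "rejection rate") before the last candidate is kept.
    rng = random.Random(seed)
    candidate = generator(rng)
    rejections = 0
    while not critic(candidate) and rejections < max_rejections:
        rejections += 1
        candidate = generator(rng)
    return candidate, rejections
```

In a real GAN, both roles are neural networks trained jointly, and the critic's "feedback" is a gradient rather than a yes/no verdict; the structure of the loop, though, is the same producer/judge pairing.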

Now this is so much easier, and it’s a whole different kind of technology. The ones I was showing you from—where is it? Oh yeah, this one. These use a completely different kind of technology based on a bigger training system.

This is a Chinese AI system called Kling. It’s an AI model trained on about 100,000 hours of hand-labeled video. Someone has gone through the video and described in text what’s happening, over thousands and thousands of hours. Then you feed an image back in. For these particular clips, I give it a start image, an end image, and a text description of what happens between the two frames.

That way, I can build clips where each keyframe has an AI segment between them. That’s how I can make clips that flow together a little better. This is the main clip—well, this one’s a pretty hard clip—but each segment has a start image that I’ve generated by itself. Part of the process is generating the keyframes. I go through hundreds of different possible keyframes to find the sequences I want, then generate the video connections between those keyframes.
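The keyframe workflow described here, with an AI-generated segment between each pair of chosen keyframes, can be sketched as a simple assembly step. This is a hypothetical illustration only: `build_segments` and its field names are invented for clarity and are not the video model's actual API.

```python
def build_segments(keyframes, prompts):
    # Each AI-generated segment spans two consecutive keyframes: a start
    # image, an end image, and a text description of the transition.
    if len(prompts) != len(keyframes) - 1:
        raise ValueError("need exactly one prompt per keyframe pair")
    segments = []
    for i, prompt in enumerate(prompts):
        segments.append({
            "start_image": keyframes[i],
            "end_image": keyframes[i + 1],
            "prompt": prompt,
        })
    return segments
```

Because each segment ends on the image the next one starts from, the generated clips share their boundary frames, which is what lets the finished sequence flow together.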

It’s a fun process. This is just a clip from Google Earth zooming in. That’s something I want to do too—connect back with my Appalachian Hyalozoic project. I want to connect to Google Maps so we really have a more holistic view of the Earth and come into different parts of the landscape.

That way, we always have that Earth perspective. It helps remind us of how tiny our scale is.