Can the CARLA Simulator output a world state?

Bit of a noob question. I have an idea for a project related to autonomous driving but I’m still learning everything — Python, machine learning, driving simulators.

My idea is to use the CARLA Simulator for a supervised imitation learning (a.k.a. behaviour cloning) project. (I tried Microsoft's AirSim first, but it's so computationally intensive that my gaming laptop, which runs GTA V fine, can barely run it.) At first I thought about doing end-to-end learning (video in, steering and acceleration out), but that feels a bit uninspiring. I'm not too jazzed about end-to-end learning as an approach to computer vision, but I am jazzed about imitation learning as an approach to vehicle behaviour.

I know that agents like AlphaStar and OpenAI Five pull the game state from the game's API rather than doing any computer vision. I'm wondering whether the same is possible with the CARLA Simulator. If I could get the simulator to record the world state while I record my keyboard inputs, I would have state-action pairs I could use for imitation learning without worrying about computer vision (or dealing with many, many hours of video).
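
To make it concrete, this is roughly what I'm picturing, pieced together from skimming the CARLA Python API docs (`carla.Client`, `get_actors`, `get_transform`, `get_control`). I haven't actually run it, so treat the exact calls and the `'hero'` role name (which I think the manual driving example uses for the player's car) as guesses on my part:

```python
# Sketch: log state-action pairs from CARLA while I drive manually.
import csv
import carla

client = carla.Client("localhost", 2000)  # default CARLA server host/port (I think)
client.set_timeout(10.0)
world = client.get_world()

# Find the vehicle I'm driving; assuming it was spawned with role_name 'hero'
hero = None
for actor in world.get_actors().filter("vehicle.*"):
    if actor.attributes.get("role_name") == "hero":
        hero = actor
        break

with open("state_action_log.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["frame", "x", "y", "yaw", "speed", "throttle", "steer", "brake"])
    while True:
        snapshot = world.wait_for_tick()   # wait for the next simulation tick
        transform = hero.get_transform()   # car position + rotation (the "state")
        velocity = hero.get_velocity()     # velocity vector in world coordinates
        control = hero.get_control()       # throttle/steer/brake applied (the "action")
        speed = (velocity.x ** 2 + velocity.y ** 2 + velocity.z ** 2) ** 0.5
        writer.writerow([
            snapshot.frame,
            transform.location.x,
            transform.location.y,
            transform.rotation.yaw,
            speed,
            control.throttle,
            control.steer,
            control.brake,
        ])
```

If something like that works, I could later extend the "state" with other actors' positions (other vehicles, pedestrians, traffic lights) instead of just my own car.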

Is anyone familiar enough with the CARLA Simulator to tell me whether this is possible?