
Fixing Lidar orientation and testing episodes


Data based on orientation

The laser always takes its measurements relative to an absolute (world-frame) orientation, so the raw beam indices cannot be used directly for angular queries. The index lookup has therefore been modified to take the vehicle's current orientation into account.

import numpy as np

heading_deg = np.degrees(heading)
# Assign 'inf' to the points where there is no obstacle ([0, 0, 0]).
lidar_data[np.all(lidar_data == [0, 0, 0], axis=1)] = float('inf')

# Distance from the car to every lidar point.
relative_lidar = lidar_data - car_pose
distances = np.linalg.norm(relative_lidar, axis=1)

# Angular resolution of the scan, in degrees per beam.
lidar_resolution = 360 / len(distances)

# Round to the nearest beam and wrap with a modulo so the index stays
# in range even when heading_deg + 180 goes past 360 (or heading_deg is negative).
index_90 = int(round(heading_deg / lidar_resolution)) % len(distances)
index_270 = int(round((heading_deg + 180) / lidar_resolution)) % len(distances)

distance_90 = distances[index_90]
distance_270 = distances[index_270]

This makes the angle calculation more intuitive, and the index is rounded (and wrapped with a modulo) so there are no out-of-range accesses. For simplicity, only one lidar beam is used per direction.
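The same idea generalizes to any angle relative to the car's heading. As a minimal sketch, a helper like the hypothetical beam_at below (not part of the project's code) would reproduce the two lookups above:

import numpy as np

def beam_at(distances, heading_deg, relative_deg):
    """Return the measured distance at an angle relative to the car's heading.

    Hypothetical helper: generalizes the index_90 / index_270 lookups above.
    """
    resolution = 360 / len(distances)
    index = int(round((heading_deg + relative_deg) / resolution)) % len(distances)
    return distances[index]

# Usage, matching the snippet above:
# distance_90 = beam_at(distances, heading_deg, 0)
# distance_270 = beam_at(distances, heading_deg, 180)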

Testing Q-learning

This part has been a bit more complicated. The exploration rate (epsilon) has been changed to follow an exponential decay schedule; the agent now explores many states, but still fails to learn what it needs to learn.

def __init__(self, epsilon=0.99, min_epsilon=0.01, decay_rate=0.97, alpha=0.1, gamma=0.9):
def decay_epsilon(self):
    """Reduce epsilon according to the decay rate."""
    self.epsilon = max(self.min_epsilon, self.epsilon * self.decay_rate)
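For context, this is roughly how the decayed epsilon drives action selection in an epsilon-greedy loop. This is a minimal sketch inside the agent class, assuming a dictionary-based Q-table and a choose_action method that are not necessarily how the project implements them:

import random

def choose_action(self, state, actions):
    """Epsilon-greedy selection: explore with probability epsilon,
    otherwise exploit the best known Q-value for this state."""
    if random.random() < self.epsilon:
        return random.choice(actions)  # explore: random action
    # exploit: pick the action with the highest stored Q-value
    q_values = [self.q_table.get((state, a), 0.0) for a in actions]
    return actions[q_values.index(max(q_values))]

# Called once at the end of each episode so epsilon shrinks from 0.99 toward 0.01:
# agent.decay_epsilon()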

This is how the decay looks over 125 episodes, which is the maximum I have been able to test because the simulation crashes.
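As a sanity check on those numbers, with the parameters above epsilon after 125 episodes works out to max(0.01, 0.99 · 0.97^125) ≈ 0.022, so the agent is still taking random actions about 2% of the time when the crash occurs:

# Epsilon after 125 episodes of exponential decay (parameters from __init__ above).
epsilon_125 = max(0.01, 0.99 * 0.97 ** 125)
print(epsilon_125)  # ≈ 0.022: still above the 0.01 floor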

(Screenshot: crash at around 150 episodes.)

For some reason I cannot get past that number of episodes, and I do not know whether it would even be enough for the agent to learn.

