Topics_Quiz grading problem

I have a problem with the 3.10 Topics Quiz. I have received the same feedback twice. I used ros2 node info /topics_quiz to make sure that /odom is subscribed, as you can see in the attached screenshot.

I keep receiving the following feedback:

Feedback from the last successful autocorrection:
✔ [12:50:03] [info] Setting up ROS2 environment (mark: 0)
✔ [12:50:04] [info] ROS2 environment setup is okay (mark: 0)
✔ [12:50:04] [assess] topics_quiz package found (mark: 1.0)
✔ [12:50:09] [info] compiling package topics_quiz… (mark: 1.0)
✔ [12:50:12] [assess] topics_quiz package compiled successfully (mark: 3.0)
✔ [12:50:12] [info] Seeing if the package can be launched… (mark: 3.0)
✔ [12:50:27] [assess] Can launch topics_quiz package successfully (mark: 4.0)
✔ [12:50:39] [info] Checking that the odometry data is engaged… (mark: 4.0)
✖ [12:51:01] [assess] Not subscribed to /odom. Let’s get this sorted first.
Please check:

  • Did you create a subscriber correctly to the topic /odom in your source code?
  • Did you use the right node name? It should be topics_quiz_node (mark: 4.0)

I do not know what I should do.
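
(For reference, the two checklist items above amount to something like the following minimal rclpy sketch. The nav_msgs/msg/Odometry message type for /odom and the callback body are assumptions for illustration, not the quiz solution.)

```python
import rclpy
from rclpy.node import Node
from nav_msgs.msg import Odometry  # /odom is typically a nav_msgs/msg/Odometry topic


class TopicsQuiz(Node):
    def __init__(self):
        # The grader checks the registered node name set here, not the Python class name.
        super().__init__('topics_quiz_node')
        # Creating the subscriber in the constructor makes it visible as soon as the
        # node starts, e.g. in the output of `ros2 node info`.
        self.odom_sub = self.create_subscription(
            Odometry, '/odom', self.odom_callback, 10)

    def odom_callback(self, msg):
        # Placeholder: keep the latest pose for the navigation logic.
        self.last_pose = msg.pose.pose


def main(args=None):
    rclpy.init(args=args)
    rclpy.spin(TopicsQuiz())
    rclpy.shutdown()
```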

Hi,

Have you fixed that issue? From the tests I’ve done, it seems your mark is 8, because the position of the robot at the end is a bit off from what is asked.

No, same issue.
I had a mark of 8 on the second trial. Then I decided to modify only the navigation section. Now my script works perfectly, but I keep getting the feedback above from the grading bot, and I have 4 out of 10. The grading bot says each new mark overwrites the previous one, so I have 4, and there is only one submission left.

Your current score is 8, as per your latest submission, and you have the following feedback.

I have assigned you two more trials to fine-tune the final part.

I figured out what my problem was. I ended the program by pressing Ctrl+C before submitting for grading because I thought the grading bot would run it by itself. However, it seems that I should keep the program running and then submit. Now I have 10 out of 10.
Thanks and have a good day

No, you should not keep the program running. If you created the package exactly as instructed, the gradebot will run it for you. Running it in the terminal may interfere with grading.

So, there is still a problem with the grading bot. I kept the script running and submitted it for grading, and it gave me 10 out of 10. Now I can see the solution. As far as I can see, the naming is the same, and I have followed the instructions carefully (in the main script, launch.py, and setup.py). I do not see any differences; I double-checked.
The only difference is that my class is declared as:
class TopicsQuiz(Node)
while the solution uses:
class AutonomousExplorationNode(Node)
but I do not think that is the issue.
My strategy for reading the laser rays and for navigation is also different, but I do not believe that is the problem either. So I guess the problem is with the grading bot: somehow, if I do not keep the script running in the terminal, it gives me 4. My problem is fixed, but generally speaking, I think there is a problem with the grading bot.
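
(For reference: in rclpy the registered node name comes from the string passed to Node.__init__() and from the executable name declared in setup.py, not from the Python class name, so the class-name difference should not matter. A minimal sketch:)

```python
import rclpy
from rclpy.node import Node


class TopicsQuiz(Node):  # renaming this to AutonomousExplorationNode changes nothing below
    def __init__(self):
        super().__init__('topics_quiz_node')  # this string is the name ROS (and the grader) sees


def main(args=None):
    rclpy.init(args=args)
    node = TopicsQuiz()
    print(node.get_name())  # -> topics_quiz_node, regardless of the class name
    node.destroy_node()
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```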

Given that hundreds of other students do not have this problem, this is unlikely to be the case.

If you are getting a 4, it means the /odom subscription check is failing. Since the subscription is created eventually, it’s very likely that it’s created with an unusual delay. The gradebot checks for the subscription 20 seconds after launching your package. Leaving the node running in the terminal is probably compensating for this delay.
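
(Illustration of that failure mode, as a hypothetical sketch rather than the actual quiz code: if the /odom subscriber is only created after a delay, for example from a timer that fires 30 seconds after startup, a check made about 20 seconds after launch finds no /odom subscription even though one appears eventually.)

```python
import rclpy
from rclpy.node import Node
from nav_msgs.msg import Odometry


class TopicsQuiz(Node):
    def __init__(self):
        super().__init__('topics_quiz_node')
        # Deferred creation: nothing is subscribed to /odom until the timer fires,
        # which is after the grader's ~20-second check.
        self.odom_sub = None
        self.startup_timer = self.create_timer(30.0, self.start_odom)

    def start_odom(self):
        if self.odom_sub is None:
            self.odom_sub = self.create_subscription(
                Odometry, '/odom', self.odom_callback, 10)

    def odom_callback(self, msg):
        pass


def main(args=None):
    rclpy.init(args=args)
    rclpy.spin(TopicsQuiz())
    rclpy.shutdown()
```

Moving the create_subscription call into __init__, as in the earlier sketch, makes the subscription visible immediately after launch.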
