About the Intermediate Generative AI for Robotics category

(Replace this first paragraph with a brief description of your new category. This guidance will appear in the category selection area, so try to keep it below 200 characters.)

Use the following paragraphs for a longer description, or to establish category guidelines or rules:

  • Why should people use this category? What is it for?

  • How exactly is this different from the other categories we already have?

  • What should topics in this category generally contain?

  • Do we need this category? Can we merge with another category, or subcategory?

user:~/ros2_ws/src/imitation_learning$ python3 imitation_learning/main_.py
Loading CSV data…
Traceback (most recent call last):
  File "/home/user/ros2_ws/src/imitation_learning/imitation_learning/main_.py", line 122, in <module>
    train_model()
  File "/home/user/ros2_ws/src/imitation_learning/imitation_learning/main_.py", line 42, in train_model
    df = load_csv(CSV_PATH)
  File "/home/user/ros2_ws/src/imitation_learning/imitation_learning/dataprep.py", line 10, in load_csv
    df_vel_cam = pd.read_csv(csv_path)
  File "/usr/local/lib/python3.10/dist-packages/pandas/io/parsers/readers.py", line 1026, in read_csv
    return _read(filepath_or_buffer, kwds)
  File "/usr/local/lib/python3.10/dist-packages/pandas/io/parsers/readers.py", line 626, in _read
    return parser.read(nrows)
  File "/usr/local/lib/python3.10/dist-packages/pandas/io/parsers/readers.py", line 1923, in read
    ) = self._engine.read(  # type: ignore[attr-defined]
  File "/usr/local/lib/python3.10/dist-packages/pandas/io/parsers/c_parser_wrapper.py", line 234, in read
    chunks = self._reader.read_low_memory(nrows)
  File "parsers.pyx", line 838, in pandas._libs.parsers.TextReader.read_low_memory
  File "parsers.pyx", line 905, in pandas._libs.parsers.TextReader._read_rows
  File "parsers.pyx", line 874, in pandas._libs.parsers.TextReader._tokenize_rows
  File "parsers.pyx", line 891, in pandas._libs.parsers.TextReader._check_tokenize_status
  File "parsers.pyx", line 2061, in pandas._libs.parsers.raise_parser_error
pandas.errors.ParserError: Error tokenizing data. C error: out of memory
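
The "C error: out of memory" comes from the pandas C parser running out of RAM while tokenizing the CSV, so the first thing worth trying is shrinking the read itself: pass explicit narrow dtypes and only the columns the training actually needs. Below is a minimal sketch of a drop-in for load_csv in dataprep.py; the column names and dtypes are assumptions, not read from the real dataset.

import pandas as pd

# Hypothetical column names and dtypes -- replace with the real CSV header.
USECOLS = ["lin_vel_x", "ang_vel_z", "image_path"]
DTYPES = {"lin_vel_x": "float32", "ang_vel_z": "float32", "image_path": "string"}

def load_csv(csv_path):
    # Reading only the needed columns with explicit 32-bit floats keeps peak
    # memory far below what pandas uses when it infers float64/object columns.
    return pd.read_csv(csv_path, usecols=USECOLS, dtype=DTYPES)
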
user:~/ros2_ws/src/imitation_learning$ python3 imitation_learning/main_.py
Loading CSV data…
Traceback (most recent call last):
  File "/home/user/ros2_ws/src/imitation_learning/imitation_learning/main_.py", line 122, in <module>
    train_model()
  File "/home/user/ros2_ws/src/imitation_learning/imitation_learning/main_.py", line 42, in train_model
    df = load_csv(CSV_PATH)
  File "/home/user/ros2_ws/src/imitation_learning/imitation_learning/dataprep.py", line 10, in load_csv
    df_vel_cam = pd.read_csv(csv_path)
  File "/usr/local/lib/python3.10/dist-packages/pandas/io/parsers/readers.py", line 1026, in read_csv
    return _read(filepath_or_buffer, kwds)
  File "/usr/local/lib/python3.10/dist-packages/pandas/io/parsers/readers.py", line 626, in _read
    return parser.read(nrows)
  File "/usr/local/lib/python3.10/dist-packages/pandas/io/parsers/readers.py", line 1923, in read
    ) = self._engine.read(  # type: ignore[attr-defined]
  File "/usr/local/lib/python3.10/dist-packages/pandas/io/parsers/c_parser_wrapper.py", line 234, in read
    chunks = self._reader.read_low_memory(nrows)
  File "parsers.pyx", line 838, in pandas._libs.parsers.TextReader.read_low_memory
  File "parsers.pyx", line 905, in pandas._libs.parsers.TextReader._read_rows
  File "parsers.pyx", line 874, in pandas._libs.parsers.TextReader._tokenize_rows
  File "parsers.pyx", line 891, in pandas._libs.parsers.TextReader._check_tokenize_status
  File "parsers.pyx", line 2061, in pandas._libs.parsers.raise_parser_error
pandas.errors.ParserError: Error tokenizing data. C error: out of memory
user:~/ros2_ws/src/imitation_learning$ sudo fallocate -l 4G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
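
Before rerunning, it can be worth confirming the swap file is actually active and how much memory is left; swapon --show and free -h do this from the shell, or, as a rough Python sketch (Linux only, it just reads /proc/meminfo):

def mem_summary(path="/proc/meminfo"):
    # Print total/available RAM and swap as reported by the kernel.
    wanted = {"MemTotal", "MemAvailable", "SwapTotal", "SwapFree"}
    with open(path) as f:
        for line in f:
            if line.split(":")[0] in wanted:
                print(line.strip())

if __name__ == "__main__":
    mem_summary()
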
user:~/ros2_ws/src/imitation_learning$ python3 imitation_learning/main_.py
Loading CSV data…
Killed
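
"Killed" with no traceback usually means the kernel's OOM killer stopped the process, so even with the 4 GB swap file a single read_csv of the whole file does not fit. Streaming the file in chunks and downcasting each chunk before concatenating often brings the footprint down; here is a sketch, where the chunk size and the float32 cast are assumptions about what the training code can tolerate:

import pandas as pd

def load_csv_chunked(csv_path, chunksize=100_000):
    # Parse the CSV in fixed-size chunks so the parser never holds the whole
    # file at once, and downcast numeric columns to float32 as we go.
    parts = []
    for chunk in pd.read_csv(csv_path, chunksize=chunksize):
        num_cols = chunk.select_dtypes("number").columns
        chunk[num_cols] = chunk[num_cols].astype("float32")
        parts.append(chunk)
    return pd.concat(parts, ignore_index=True)
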
user:~/ros2_ws/src/imitation_learning$ python3 imitation_learning/main_.py
Loading CSV data…
Traceback (most recent call last):
  File "/home/user/ros2_ws/src/imitation_learning/imitation_learning/main_.py", line 122, in <module>
    train_model()
  File "/home/user/ros2_ws/src/imitation_learning/imitation_learning/main_.py", line 42, in train_model
    df = load_csv(CSV_PATH)
  File "/home/user/ros2_ws/src/imitation_learning/imitation_learning/dataprep.py", line 10, in load_csv
    df_vel_cam = pd.read_csv(csv_path)
  File "/usr/local/lib/python3.10/dist-packages/pandas/io/parsers/readers.py", line 1026, in read_csv
    return _read(filepath_or_buffer, kwds)
  File "/usr/local/lib/python3.10/dist-packages/pandas/io/parsers/readers.py", line 626, in _read
    return parser.read(nrows)
  File "/usr/local/lib/python3.10/dist-packages/pandas/io/parsers/readers.py", line 1923, in read
    ) = self._engine.read(  # type: ignore[attr-defined]
  File "/usr/local/lib/python3.10/dist-packages/pandas/io/parsers/c_parser_wrapper.py", line 234, in read
    chunks = self._reader.read_low_memory(nrows)
  File "parsers.pyx", line 838, in pandas._libs.parsers.TextReader.read_low_memory
  File "parsers.pyx", line 905, in pandas._libs.parsers.TextReader._read_rows
  File "parsers.pyx", line 874, in pandas._libs.parsers.TextReader._tokenize_rows
  File "parsers.pyx", line 891, in pandas._libs.parsers.TextReader._check_tokenize_status
  File "parsers.pyx", line 2061, in pandas._libs.parsers.raise_parser_error
pandas.errors.ParserError: Error tokenizing data. C error: out of memory
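
If the concatenated frame still will not fit, a one-off conversion of the CSV to a columnar format such as Parquet lets later runs load just the columns they need. A rough sketch, assuming pyarrow is installed and with an arbitrary output path:

import pandas as pd
import pyarrow as pa
import pyarrow.parquet as pq

def csv_to_parquet(csv_path, parquet_path, chunksize=100_000):
    # Convert once, chunk by chunk, so the whole CSV never sits in RAM.
    writer = None
    for chunk in pd.read_csv(csv_path, chunksize=chunksize):
        table = pa.Table.from_pandas(chunk, preserve_index=False)
        if writer is None:
            writer = pq.ParquetWriter(parquet_path, table.schema)
        writer.write_table(table)
    if writer is not None:
        writer.close()

After the conversion, load_csv can read with pd.read_parquet(parquet_path, columns=[...]) and pull in only what train_model actually uses.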