# Report running over 36000 .rm files #23
## Happy 36k report

Alright, the report is done! So what did I do? I wrote a Python script that parallelizes running rmc over 36000 files coming from the Scrybble database.

## TOC

- Quick overview
- PC specs
- Improvements for the next report
- Considerations
- Addenda
## Quick overview

The report itself? What is success, what is failure? In practice, there is only one success condition, and that is exit code 0. There are two failure conditions: the rmc process exits with a non-zero exit code, or it fails to finish within the 60-second timeout.
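For illustration, here is a minimal sketch of that success/failure classification on a single file, reduced from the full script in the Addenda below. The `rmc` invocation and the 60-second timeout match the script; the input filename is hypothetical:

```python
import subprocess

def classify(file_path: str) -> str:
    """Run rmc on one .rm file; exit code 0 is the only success condition."""
    try:
        subprocess.run(
            ["rmc", "-t", "svg", "-o", f"{file_path}.svg", file_path],
            capture_output=True, text=True, check=True, timeout=60,
        )
        return "success"  # exit code 0
    except subprocess.CalledProcessError:
        return "failed: non-zero exit code"
    except subprocess.TimeoutExpired:
        return "failed: timed out after 60 seconds"

print(classify("example.rm"))  # hypothetical input file
```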
## PC specs
## Improvements for the next report

These files contain no identifiable information other than the fact that the filenames themselves haven't been altered after having been downloaded from the reMarkable tablet. Next time, I think it might be best to hash the filenames too, just to add a slight extra layer of security (see the sketch after the script below).

## Considerations

While the SVG output files are definitely on my computer, they are absolutely not included in this report; that is considered very private information. Even I have not allowed myself so much as a look at a single file. I have glanced at the output folder for a moment, and it does look like the output is going well overall.

## Addenda

This is the Python script in its entirety. The script was written with the help of Claude to save some time, haha.

```python
#!/usr/bin/env python3
import os
import glob
import time
import sqlite3
from pathlib import Path
from concurrent.futures import ProcessPoolExecutor, as_completed
from tqdm import tqdm
import subprocess
from datetime import datetime
import atexit


class ProcessingLogger:
    def __init__(self, db_path="processing_log.db"):
        self.db_path = db_path
        self.setup_database()
        atexit.register(self.close)

    def setup_database(self):
        self.conn = sqlite3.connect(self.db_path)
        self.conn.execute("""
            CREATE TABLE IF NOT EXISTS processing_runs (
                id INTEGER PRIMARY KEY AUTOINCREMENT,
                start_time TIMESTAMP,
                end_time TIMESTAMP,
                total_files INTEGER,
                successful_files INTEGER,
                failed_files INTEGER,
                total_duration REAL
            )
        """)
        self.conn.execute("""
            CREATE TABLE IF NOT EXISTS file_logs (
                id INTEGER PRIMARY KEY AUTOINCREMENT,
                run_id INTEGER,
                file_path TEXT,
                status TEXT,
                stdout TEXT,
                stderr TEXT,
                error_message TEXT,
                processing_time REAL,
                timestamp TIMESTAMP,
                FOREIGN KEY (run_id) REFERENCES processing_runs(id)
            )
        """)
        self.conn.commit()

    def start_run(self, total_files):
        cursor = self.conn.execute(
            "INSERT INTO processing_runs (start_time, total_files) VALUES (?, ?)",
            (datetime.now(), total_files)
        )
        self.current_run_id = cursor.lastrowid
        self.conn.commit()
        return self.current_run_id

    def log_file(self, file_path, status, stdout, stderr, error_message, processing_time):
        self.conn.execute("""
            INSERT INTO file_logs
            (run_id, file_path, status, stdout, stderr, error_message, processing_time, timestamp)
            VALUES (?, ?, ?, ?, ?, ?, ?, ?)
        """, (self.current_run_id, file_path, status, stdout, stderr, error_message,
              processing_time, datetime.now()))
        self.conn.commit()

    def end_run(self, successful, failed, duration):
        self.conn.execute("""
            UPDATE processing_runs
            SET end_time = ?, successful_files = ?, failed_files = ?,
                total_duration = ?
            WHERE id = ?
        """, (datetime.now(), successful, failed, duration, self.current_run_id))
        self.conn.commit()

    def close(self):
        if hasattr(self, 'conn'):
            self.conn.close()


def process_file(file_path):
    """Process a single file using the rmc command"""
    start_time = time.time()
    filename = os.path.basename(file_path)
    output_path = f"python_out/{filename}.svg"
    try:
        result = subprocess.run(
            ['rmc', '-t', 'svg', '-o', output_path, file_path],
            capture_output=True,
            check=True,
            text=True,
            timeout=60
        )
        duration = time.time() - start_time
        return (True, file_path, result.stdout, result.stderr, None, duration)
    except subprocess.TimeoutExpired as e:
        duration = time.time() - start_time
        return (False, file_path, e.stdout, e.stderr, str(e), duration)
    except subprocess.CalledProcessError as e:
        duration = time.time() - start_time
        return (False, file_path, e.stdout, e.stderr, str(e), duration)


def main():
    # Create output directory
    os.makedirs("python_out", exist_ok=True)

    # Initialize logger
    logger = ProcessingLogger()

    # Find all .rm files
    files = glob.glob("**/*.rm", recursive=True)
    total_files = len(files)
    print(f"Found {total_files} files to process")

    # Start run in logger
    run_id = logger.start_run(total_files)

    # Track timing
    start_time = time.time()

    # Process files in parallel with progress bar
    successful = 0
    failed = 0

    # Use number of CPU cores for parallel processing
    max_workers = os.cpu_count()

    with ProcessPoolExecutor(max_workers=max_workers) as executor:
        # Submit all tasks
        future_to_file = {executor.submit(process_file, file): file
                          for file in files}

        # Process as they complete with progress bar
        with tqdm(total=total_files, desc="Processing files") as pbar:
            for future in as_completed(future_to_file):
                success, file_path, stdout, stderr, error, duration = future.result()

                # Log the result
                logger.log_file(
                    file_path,
                    "success" if success else "failed",
                    stdout,
                    stderr,
                    error,
                    duration
                )

                # If there's any stderr output, print it even for successful runs
                if not success:
                    tqdm.write(f"\nError processing {file_path}:")
                    tqdm.write(f"Error: {error}")
                elif stderr:
                    tqdm.write(f"\nWarning in {file_path}:")
                    tqdm.write(stderr)

                if success:
                    successful += 1
                else:
                    failed += 1
                pbar.update(1)

    # Record final statistics
    end_time = time.time()
    total_duration = end_time - start_time
    logger.end_run(successful, failed, total_duration)

    # Print summary
    print("\nProcessing complete!")
    print(f"Total time: {total_duration:.2f} seconds")
    print(f"Files processed: {successful}")
    print(f"Files failed: {failed}")
    print(f"Results logged to {logger.db_path}")


if __name__ == "__main__":
    main()
```
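As an aside, here is a minimal sketch of the filename-hashing idea mentioned under "Improvements for the next report". This is not part of the report pipeline; the helper name and the choice of SHA-256 are assumptions for illustration:

```python
import hashlib
from pathlib import Path

def anonymized_name(file_path: str) -> str:
    """Hypothetical helper: hash the original filename (not the file
    contents), keeping the .rm suffix so the file type stays visible."""
    p = Path(file_path)
    digest = hashlib.sha256(p.name.encode("utf-8")).hexdigest()[:16]
    return f"{digest}{p.suffix}"

print(anonymized_name("Lecture notes week 3.rm"))  # hypothetical filename
```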
This is the actual report file. It's an SQLite database.
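Since the report is just the `processing_log.db` database produced by the script above, it can be inspected with a few lines of Python. A minimal sketch, assuming the schema defined in the script (the `processing_runs` and `file_logs` tables):

```python
import sqlite3

conn = sqlite3.connect("processing_log.db")

# Overall outcome of each run
for row in conn.execute(
    "SELECT id, total_files, successful_files, failed_files, total_duration "
    "FROM processing_runs"
):
    print(row)

# The most common error messages across all failed files
for message, count in conn.execute(
    "SELECT error_message, COUNT(*) AS n FROM file_logs "
    "WHERE status = 'failed' GROUP BY error_message ORDER BY n DESC LIMIT 10"
):
    print(count, message)

conn.close()
```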
@ricklupton I got on a video call with @ChenghaoMou to fix some of the most common issues in the 36k files. We opened some PRs.
The color id 9 error refers to the highlight placeholder id, which I will fix once the color parsing in rmscene is merged.
Hi!
I'm running rmc over 36000 .rm files :D
I'll be posting the report in this GitHub issue. The report will be fully anonymized, because... y'know.