Report running over 36000 .rm files #23

Open
Azeirah opened this issue Dec 12, 2024 · 7 comments

Comments

@Azeirah
Contributor

Azeirah commented Dec 12, 2024

Hi!

I'm running rmc over 36000 .rm files :D

I'll be posting the report in this GitHub issue. The report will be fully anonymized, because... y'know.

@Azeirah
Contributor Author

Azeirah commented Dec 12, 2024

[image]

@Azeirah
Contributor Author

Azeirah commented Dec 12, 2024

Happy 36k report

Alright, the report is done! So what did I do?

I wrote a Python script that parallelizes running rmc over 36,000 files coming from the Scrybble database.

TOC

  • Quick overview
  • What is success? What is failure?
  • PC specs
  • Improvements for the next report
  • Considerations
  • Addendum (Python script generating the report, including the report database schema)

Quick overview

[images]

The report itself?

What is success, what is failure? In practice, there is only one success condition, and that is exit code 0.

There are two failure conditions (a minimal sketch of the classification follows this list):

  1. Any exit code other than 0
  2. A 60-second timeout per subprocess. If rmc takes longer than 60 seconds on a single file, it's counted as a failure. This parameter can of course be adjusted for future runs, but hey, gotta start somewhere haha
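For reference, here is a minimal sketch of that classification (the same idea the full script in the addendum uses; paths here are placeholders):

import subprocess

def classify(rm_file, out_svg):
    """Return "success" or "failed" using the two failure conditions above."""
    try:
        subprocess.run(
            ["rmc", "-t", "svg", "-o", out_svg, rm_file],
            capture_output=True,
            text=True,
            check=True,   # failure condition 1: non-zero exit code raises CalledProcessError
            timeout=60,   # failure condition 2: over 60 seconds raises TimeoutExpired
        )
        return "success"
    except (subprocess.CalledProcessError, subprocess.TimeoutExpired):
        return "failed"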

PC specs

  • AMD Ryzen 9 7900X, 12 cores, AM5 platform
  • 64 GB DDR5 RAM
  • Running NixOS

Improvements for the next report

These files contain no identifiable information other than the filenames themselves, which haven't been altered after being downloaded from the reMarkable tablet. Next time, I think it would be best to hash the filenames too, just to add a slight extra layer of security (a rough sketch of what that could look like is below).
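Something along these lines would do it (hashlib is in the standard library; the choice of SHA-256 and truncating to 16 hex characters is just my assumption here):

import hashlib
from pathlib import Path

def anonymize_filename(file_path):
    """Hash the filename so the report contains no original names."""
    name = Path(file_path).name
    return hashlib.sha256(name.encode("utf-8")).hexdigest()[:16] + ".rm"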

Considerations

While the SVG output files are definitely on my computer, they are absolutely not included in this report; that is considered very private information, and I have not allowed myself to so much as look at a single file. I did glance at the output folder for a moment, and overall the output does look like it's going well.

Addendum

This is the Python script in its entirety. It was written with the help of Claude to save some time, haha.

#!/usr/bin/env python3

import os
import glob
import time
import sqlite3
from pathlib import Path
from concurrent.futures import ProcessPoolExecutor, as_completed
from tqdm import tqdm
import subprocess
from datetime import datetime
import atexit

class ProcessingLogger:
    def __init__(self, db_path="processing_log.db"):
        self.db_path = db_path
        self.setup_database()
        atexit.register(self.close)

    def setup_database(self):
        self.conn = sqlite3.connect(self.db_path)
        self.conn.execute("""
            CREATE TABLE IF NOT EXISTS processing_runs (
                id INTEGER PRIMARY KEY AUTOINCREMENT,
                start_time TIMESTAMP,
                end_time TIMESTAMP,
                total_files INTEGER,
                successful_files INTEGER,
                failed_files INTEGER,
                total_duration REAL
            )
        """)
        self.conn.execute("""
            CREATE TABLE IF NOT EXISTS file_logs (
                id INTEGER PRIMARY KEY AUTOINCREMENT,
                run_id INTEGER,
                file_path TEXT,
                status TEXT,
                stdout TEXT,
                stderr TEXT,
                error_message TEXT,
                processing_time REAL,
                timestamp TIMESTAMP,
                FOREIGN KEY (run_id) REFERENCES processing_runs(id)
            )
        """)
        self.conn.commit()

    def start_run(self, total_files):
        cursor = self.conn.execute(
            "INSERT INTO processing_runs (start_time, total_files) VALUES (?, ?)",
            (datetime.now(), total_files)
        )
        self.current_run_id = cursor.lastrowid
        self.conn.commit()
        return self.current_run_id

    def log_file(self, file_path, status, stdout, stderr, error_message, processing_time):
        self.conn.execute("""
            INSERT INTO file_logs 
            (run_id, file_path, status, stdout, stderr, error_message, processing_time, timestamp)
            VALUES (?, ?, ?, ?, ?, ?, ?, ?)
        """, (self.current_run_id, file_path, status, stdout, stderr, error_message, 
              processing_time, datetime.now()))
        self.conn.commit()

    def end_run(self, successful, failed, duration):
        self.conn.execute("""
            UPDATE processing_runs 
            SET end_time = ?, successful_files = ?, failed_files = ?, 
                total_duration = ?
            WHERE id = ?
        """, (datetime.now(), successful, failed, duration, self.current_run_id))
        self.conn.commit()

    def close(self):
        if hasattr(self, 'conn'):
            self.conn.close()

def process_file(file_path):
    """Process a single file using rmc command"""
    start_time = time.time()
    filename = os.path.basename(file_path)
    output_path = f"python_out/{filename}.svg"
    
    try:
        result = subprocess.run(
            ['rmc', '-t', 'svg', '-o', output_path, file_path], 
            capture_output=True, 
            check=True, 
            text=True,
            timeout=60
        )
        duration = time.time() - start_time
        return (True, file_path, result.stdout, result.stderr, None, duration)
    except subprocess.TimeoutExpired as e:
        duration = time.time() - start_time
        return (False, file_path, e.stdout, e.stderr, str(e), duration)
    except subprocess.CalledProcessError as e:
        duration = time.time() - start_time
        return (False, file_path, e.stdout, e.stderr, str(e), duration)

def main():
    # Create output directory
    os.makedirs("python_out", exist_ok=True)
    
    # Initialize logger
    logger = ProcessingLogger()
    
    # Find all .rm files
    files = glob.glob("**/*.rm", recursive=True)
    total_files = len(files)
    print(f"Found {total_files} files to process")
    
    # Start run in logger
    run_id = logger.start_run(total_files)
    
    # Track timing
    start_time = time.time()
    
    # Process files in parallel with progress bar
    successful = 0
    failed = 0
    
    # Use number of CPU cores for parallel processing
    max_workers = os.cpu_count()
    
    with ProcessPoolExecutor(max_workers=max_workers) as executor:
        # Submit all tasks
        future_to_file = {executor.submit(process_file, file): file 
                         for file in files}
        
        # Process as they complete with progress bar
        with tqdm(total=total_files, desc="Processing files") as pbar:
            for future in as_completed(future_to_file):
                success, file_path, stdout, stderr, error, duration = future.result()
                
                # Log the result
                logger.log_file(
                    file_path,
                    "success" if success else "failed",
                    stdout,
                    stderr,
                    error,
                    duration
                )
                
                # If there's any stderr output, print it even for successful runs
                if not success:
                    tqdm.write(f"\nError processing {file_path}:")
                    tqdm.write(f"Error: {error}")
                elif stderr:
                    tqdm.write(f"\nWarning in {file_path}:")
                    tqdm.write(stderr)

                if success:
                    successful += 1
                else:
                    failed += 1
                
                pbar.update(1)
    
    # Record final statistics
    end_time = time.time()
    total_duration = end_time - start_time
    logger.end_run(successful, failed, total_duration)
    
    # Print summary
    print("\nProcessing complete!")
    print(f"Total time: {total_duration:.2f} seconds")
    print(f"Files processed: {successful}")
    print(f"Files failed: {failed}")
    print(f"Results logged to {logger.db_path}")

if __name__ == "__main__":
    main()
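If you want to reproduce the overview numbers from the resulting processing_log.db, a query along these lines should work (column names taken straight from the CREATE TABLE statements above; treat it as a sketch):

import sqlite3

conn = sqlite3.connect("processing_log.db")
for status, count in conn.execute(
    "SELECT status, COUNT(*) FROM file_logs GROUP BY status"
):
    print(status, count)
conn.close()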

@Azeirah
Contributor Author

Azeirah commented Dec 12, 2024

@ChenghaoMou

" base_color_id)
  File ""/home/lb/.pyenv/versions/3.10.14/lib/python3.10/site-packages/rmc/exporters/writing_tools.py"", line 48, in __init__
    self.base_color = RM_PALETTE[base_color_id]
KeyError: 9
"

"RASE_AREA:
  File ""/home/lb/.pyenv/versions/3.10.14/lib/python3.10/enum.py"", line 437, in __getattr__
    raise AttributeError(name) from None
AttributeError: ERASE_AREA. Did you mean: 'ERASER_AREA'?
"

"/site-packages/rmc/exporters/writing_tools.py"", line 224, in __init__
    super().__init__(base_width, base_color_id)
TypeError: Pen.__init__() missing 1 required positional argument: 'base_color_id'
"

"ome/lb/.pyenv/versions/3.10.14/lib/python3.10/site-packages/rmc/exporters/svg.py"", line 130, in build_anchor_pos
    ypos += LINE_HEIGHTS[p.style.value]
KeyError: <ParagraphStyle.CHECKBOX_CHECKED: 7>
"

" File ""/home/lb/.pyenv/versions/3.10.14/lib/python3.10/site-packages/rmc/exporters/svg.py"", line 130, in build_anchor_pos
    ypos += LINE_HEIGHTS[p.style.value]
KeyError: <ParagraphStyle.BULLET2: 5>
"

"n3.10/site-packages/rmscene/tagged_block_common.py"", line 63, in read_header
    raise ValueError(""Wrong header: %r"" % header)
ValueError: Wrong header: b'reMarkable .lines file, version=5          '
"

"
  File ""/home/lb/.pyenv/versions/3.10.14/lib/python3.10/site-packages/rmc/exporters/svg.py"", line 130, in build_anchor_pos
    ypos += LINE_HEIGHTS[p.style.value]
KeyError: <ParagraphStyle.BASIC: 0>
"

"File ""/home/lb/.pyenv/versions/3.10.14/lib/python3.10/site-packages/rmc/exporters/svg.py"", line 130, in build_anchor_pos
    ypos += LINE_HEIGHTS[p.style.value]
KeyError: <ParagraphStyle.CHECKBOX: 6>
"

"n3.10/site-packages/rmscene/tagged_block_common.py"", line 63, in read_header
    raise ValueError(""Wrong header: %r"" % header)
ValueError: Wrong header: b'reMarkable .lines file, version=3          '
"


"WARNING:rmscene.text:Unknown formatting code in text: 1
WARNING:rmscene.text:Unknown formatting code in text: 2
"
"WARNING:rmscene.scene_stream:Error reading block: Bad tag type 0x0 at position 144
"
"WARNING:rmscene.tagged_block_reader:Some data has not been read. The data may have been written using a newer format than this reader supports.
"

"nown formatting code in text: 1
WARNING:rmscene.text:Unknown formatting code in text: 2
WARNING:rmscene.text:Unknown formatting code in text: 1
WARNING:rmscene.text:Unknown formatting code in text: 2
"

@Azeirah
Contributor Author

Azeirah commented Dec 12, 2024

[image]

@Azeirah
Contributor Author

Azeirah commented Dec 12, 2024

@ricklupton I got on a video call with @ChenghaoMou to fix some of the most common issues in the 36k files. We opened some PRs.

@ChenghaoMou
Contributor

The color id 9 error is referring to the highlight placeholder id, which I will fix once the color parsing in rmscene is merged.
