Popular repositories

  1. dhSegment (Public)

     Forked from dhlab-epfl/dhSegment

     Generic framework for historical document processing

     Python 1

  2. FormatConverter (Public)

     Convert 3D formats and GDAL data

     C++ 1 1

  3. AABBRayTriangle (Public)

     k-d tree for ray tracing: AABB/triangle intersection

     C++ 1

  4. image2markdown (Public)

     Upload an image containing text and formulas; returns the Markdown code

     JavaScript 1

  5. TorchFFTOnnxExporter (Public)

     Export torch FFT/IFFT operators to ONNX

     Python 1

  6. DamonsGraphic (Public)

     Basic computer graphics algorithms

     C++

90 contributions in the last year


Contribution activity

March 2025

Created 1 repository

Opened their first pull request on GitHub, in alshedivat/al-folio (public template):

feat: add tufte css theme

Created an issue in NVIDIA/TensorRT-Model-Optimizer that received 2 comments

"Slower when quantizing the whole BERT model than quantizing only the FFN"

I used mtq.quantize to quantize a torch BERT model for TensorRT. When I only quantize the FFN like this: CUSTOM_INT8_SMOOTHQUANT_CFG["quant_cfg"]["layernorm…
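The config style referenced in the issue can be sketched as follows. This is a minimal illustration of the wildcard-pattern quantization configs used by `modelopt.torch.quantization` (`mtq`): the base recipe dict below is a hypothetical stand-in for `mtq.INT8_SMOOTHQUANT_CFG`, and the layer-name pattern `*intermediate.dense*` is an assumption about which BERT sublayers count as FFN.

```python
import copy

# Hypothetical stand-in for mtq.INT8_SMOOTHQUANT_CFG; real modelopt configs
# map layer-name wildcard patterns to quantizer attributes.
INT8_SMOOTHQUANT_CFG = {
    "quant_cfg": {
        "*weight_quantizer": {"num_bits": 8, "axis": 0},
        "*input_quantizer": {"num_bits": 8, "axis": None},
    },
    "algorithm": "smoothquant",
}

# Restrict quantization to the FFN: disable everything by default, then
# re-enable only layers whose names match the BERT FFN linears (the
# *intermediate.dense* pattern is an assumption for illustration).
CUSTOM_INT8_SMOOTHQUANT_CFG = copy.deepcopy(INT8_SMOOTHQUANT_CFG)
CUSTOM_INT8_SMOOTHQUANT_CFG["quant_cfg"]["default"] = {"enable": False}
CUSTOM_INT8_SMOOTHQUANT_CFG["quant_cfg"]["*intermediate.dense*"] = {"num_bits": 8}

# In the real API, the custom config would then be passed to
# mtq.quantize(model, CUSTOM_INT8_SMOOTHQUANT_CFG, forward_loop).
```

Using `copy.deepcopy` keeps the stock recipe untouched, so the original and the FFN-only variant can be benchmarked against each other, which is the comparison the issue describes.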

Opened 2 other issues in 2 repositories
Started 2 discussions in 2 repositories