Troubleshooting

The converter exits immediately with no output

Check the input path. The most common cause is a missing or incorrect path:

# Verify the path exists
ls ./my-export

# Run with the correct path
uv run chatgpt-to-markdown ./my-export ./archive

Verify the export structure by checking for export_manifest.json at the root:

ls ./my-export/export_manifest.json

If the file is absent, the ZIP may have extracted with an extra nesting level. Inspect the directory:

unzip chatgpt_export.zip -d ./extracted
ls ./extracted/

MISSING ASSET comments appear in output Markdown

When an asset pointer cannot be resolved, the converter inserts an HTML comment instead of breaking:

<!-- MISSING ASSET: file-service://file-8Vk2ls8JSO2iOV... -->

Common causes:

  • The export was downloaded without attachments (large-file filters in some browsers)
  • The ZIP was only partially extracted
  • A file was manually deleted from the export directory

Fix: Re-download the full export ZIP, extract it completely, then re-run the converter.
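To see how many assets are affected before re-downloading, you can scan the output for the comment the converter emits. A sketch, assuming the comment format shown above; `find_missing_assets` is a name made up here:

```python
import re
from pathlib import Path

# Count unresolved asset pointers left behind in the converted Markdown.
# The pattern matches the HTML comment emitted for each missing file.
pattern = re.compile(r"<!-- MISSING ASSET: (\S+) -->")

def find_missing_assets(archive_dir: str) -> dict[str, int]:
    counts: dict[str, int] = {}
    for md in Path(archive_dir).rglob("*.md"):
        for pointer in pattern.findall(md.read_text(encoding="utf-8")):
            counts[pointer] = counts.get(pointer, 0) + 1
    return counts
```

If the same pointer appears many times, a single missing file in the export is responsible for all of those gaps.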

ZIP input produces no conversations

Large exports split conversations across multiple conversations-*.json partitions. Verify they extracted:

ls ./my-export/conversations-*.json

If none are found, the ZIP structure may differ from expected. Open the ZIP and inspect its root-level contents.
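Once the partitions are in place, you can sanity-check the total before converting. A minimal sketch, assuming each `conversations-*.json` partition is a JSON array of conversation objects; `count_conversations` is a hypothetical helper:

```python
import json
from pathlib import Path

# Tally conversations across every partition file. Assumes each
# conversations-*.json partition holds a JSON array of conversations.
def count_conversations(export_dir: str) -> int:
    total = 0
    for part in sorted(Path(export_dir).glob("conversations-*.json")):
        conversations = json.loads(part.read_text(encoding="utf-8"))
        print(f"{part.name}: {len(conversations)}")
        total += len(conversations)
    return total
```

A total of zero with partitions present suggests the files are empty or not arrays, which points to a corrupt or partial download.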

Redacted metadata shows placeholder values

PII redaction is on by default. User email, phone number, and birth year are replaced with [REDACTED]. To preserve them:

uv run chatgpt-to-markdown ./export ./archive --no-redact-pii

Or set the environment variable:

export CONVERTER_REDACT_PII=false

Thinking blocks are missing from o-series conversations

Thinking/reasoning blocks are excluded by default. To include them:

uv run chatgpt-to-markdown ./export ./archive --include-thinking

Output filenames are truncated

Filenames are capped at 200 characters by default for cross-platform compatibility. To adjust the limit:

export CONVERTER_MAX_FILENAME_LENGTH=150

Large export is slow

  • Deduplication adds SHA-256 hashing overhead. Disable it if speed matters more than storage efficiency:

    uv run chatgpt-to-markdown ./export ./archive --no-deduplicate
    
  • The converter loads all conversations into memory. For exports over 1 GB, ensure at least 2× the export size in available RAM.
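To estimate how much deduplication is actually saving before disabling it, the content-hash grouping can be sketched as follows. This is an illustration of the SHA-256 approach, not the converter's implementation; `dedup_index` is a made-up name:

```python
import hashlib
from pathlib import Path

# Group files by SHA-256 digest: files sharing a digest are byte-for-byte
# copies, so only one per group needs to be stored.
def dedup_index(asset_dir: str) -> dict[str, list[Path]]:
    by_hash: dict[str, list[Path]] = {}
    for f in Path(asset_dir).rglob("*"):
        if f.is_file():
            digest = hashlib.sha256(f.read_bytes()).hexdigest()
            by_hash.setdefault(digest, []).append(f)
    return by_hash
```

If most groups contain a single file, deduplication is buying little and `--no-deduplicate` costs almost nothing in storage.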

Validation reports broken links

The converter validates output integrity in Step 11 and logs a broken_links count at the end of the run. Re-run the conversion and check the log output.

If links remain broken after re-running, open a GitHub issue with the validation summary from the log.
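To identify which links are broken before filing an issue, a relative-link check over the output can help. A minimal sketch, assuming standard Markdown link syntax; `broken_links` is a hypothetical helper, not part of the converter's validation step:

```python
import re
from pathlib import Path

# Find Markdown links whose relative targets do not exist on disk.
# External http(s) links are skipped; fragments after '#' are ignored.
link_re = re.compile(r"\]\(([^)#]+)")

def broken_links(archive_dir: str) -> list[tuple[Path, str]]:
    broken = []
    for md in Path(archive_dir).rglob("*.md"):
        for target in link_re.findall(md.read_text(encoding="utf-8")):
            if target.startswith(("http://", "https://")):
                continue
            if not (md.parent / target).exists():
                broken.append((md, target))
    return broken
```

Include the offending file paths and targets alongside the log's validation summary in the issue report.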

Next steps