
About

Unstructured text - lecture notes, meeting transcripts, research papers - contains implicit hierarchical relationships that human readers must mentally reconstruct. This tool automates that reconstruction. It sends your raw text to a large language model (Groq Cloud, Llama 3-70B) which identifies the central topic, major themes, subtopics, and supporting details, then returns a structured tree. The tree is rendered as an interactive mind map on an HTML Canvas with pan, zoom, drag, collapse, and export capabilities. Without the API key, a client-side heuristic parser segments text by paragraph boundaries, sentence length, and keyword frequency to approximate a hierarchy. Approximation quality degrades for texts lacking clear structural cues (e.g., stream-of-consciousness prose). The layout algorithm distributes nodes radially from the root, computing bounding boxes via measureText and applying minimum angular separation of 15° between siblings to prevent overlap.

Practical failure mode: feeding the tool text shorter than 50 words yields shallow, uninformative maps. Conversely, inputs exceeding 8000 tokens may be truncated by the LLM context window. For best results, supply 200 - 3000 words of structured or semi-structured content. The tool persists your last input and generated map to localStorage, so refreshing the page does not lose work. Pro tip: if your source material has numbered lists or headings, the AI produces significantly better hierarchies.


Formulas

The radial tree layout positions each child node at an angle computed from its index among siblings. For a parent at position (px, py) with n children, the i-th child is placed at:

xi = px + r cos(θi)
yi = py + r sin(θi)
θi = θstart + i · θspan / n

where r is the radial distance (proportional to depth level, typically 250px × depth), θstart is the starting angle allocated to this subtree, and θspan is the angular sweep assigned based on the subtree's total descendant count relative to its siblings. The sweep allocation follows:

θspan(node) = θspan(parent) × count(node) / count(parent)

where count(node) returns 1 + the sum of descendant counts for all children. Bézier edge control points are computed at the midpoint between parent and child with a perpendicular offset of 0.2 × the parent-child distance, producing gentle curves. Node bounding-box width is capped at 200px; text lines are wrapped when measureText(line).width > maxWidth − 2 × padding.
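A minimal sketch of the sweep allocation, child placement, and Bézier control-point math described above, in plain JavaScript. The node shape ({ x, y, children }), the function names, and the choice to center each child within its slice are assumptions for illustration, not the tool's actual code.

```javascript
// count(node) = 1 + sum of descendant counts of all children,
// as defined in the text.
function count(node) {
  return 1 + (node.children || []).reduce((sum, c) => sum + count(c), 0);
}

// Recursively place children on a circle of radius 250px × depth
// around the parent, splitting the parent's angular sweep in
// proportion to count(child) / count(parent).
function layoutChildren(parent, thetaStart, thetaSpan, depth) {
  const r = 250 * depth;
  const parentCount = count(parent);
  let angle = thetaStart;
  for (const child of parent.children || []) {
    const span = thetaSpan * (count(child) / parentCount);
    const theta = angle + span / 2; // center child in its slice (assumption)
    child.x = parent.x + r * Math.cos(theta);
    child.y = parent.y + r * Math.sin(theta);
    layoutChildren(child, angle, span, depth + 1);
    angle += span;
  }
}

// Quadratic Bézier control point: midpoint of the edge, offset
// perpendicular to it by 0.2 × the parent-child distance.
// The offset vector (-dy, dx) × 0.2 has exactly that length.
function controlPoint(p, c) {
  const dx = c.x - p.x;
  const dy = c.y - p.y;
  return { x: (p.x + c.x) / 2 - 0.2 * dy, y: (p.y + c.y) / 2 + 0.2 * dx };
}
```

Note that under the count-based formula, sibling slices need not sum to the full sweep: the parent itself contributes one unit to count(parent), so a root with two equal leaf children allocates each child a third of the available sweep.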

Reference Data

Feature | Specification | Notes
AI Model | Llama 3-70B (Groq Cloud) | Temperature 0.3, max 8192 tokens
Fallback Parser | Client-side heuristic | Paragraph & keyword-based segmentation
Max Input Length | 8000 tokens (~6000 words) | Truncated beyond context window
Min Recommended Input | 50 words | Below this, maps are too shallow
Max Tree Depth | 5 levels | Deeper nesting degrades readability
Max Nodes | 200 | Layout performance constraint
Node Bullet Points | 1-5 per node | Key details from source text
Export Format | PNG (2× resolution) | High-DPI canvas export
Persistence | localStorage | Auto-save input & map state
Pan & Zoom | Mouse drag, scroll wheel, pinch | Touch-enabled for mobile
Node Interaction | Click collapse, drag reposition, double-click edit | All nodes interactive
Layout Algorithm | Recursive radial tree | Angular separation ≥ 15°
Edge Rendering | Quadratic Bézier curves | Smooth curved connections
Keyboard Shortcuts | Ctrl+Enter, Ctrl+S, R, +/- | Generate, Export, Reset, Zoom
Accessibility | WCAG 2.1 AA | aria-live, focus management, contrast ≥ 4.5:1
API Key Security | In-memory only | Never persisted to storage
Color Coding | Depth-based hue rotation | Root blue, children shift through palette
Text Wrapping | Canvas measureText | Max 200px node width
Mobile Support | Touch pan, pinch zoom | Responsive textarea & controls
Print Support | @media print | Canvas exported, controls hidden

Frequently Asked Questions

What happens if I don't provide an API key?

The tool falls back to a client-side heuristic parser that segments text by paragraph boundaries, identifies potential headings (short lines under 60 characters), and clusters sentences under those headings. Quality is lower than the AI path - it cannot infer implicit relationships or generate bullet-point summaries. For structured text with clear headings, fallback results are acceptable. For unstructured prose, results will be flat and less useful.
How long can my input text be?

The Groq Llama 3-70B model supports 8192 tokens (roughly 6000 English words). Input beyond this limit is truncated from the end before submission. The tool displays a warning toast if truncation occurs. To avoid information loss, split very long documents into sections and generate separate maps, or condense your text before pasting.
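A rough sketch of the truncate-from-the-end behavior, using the ~6000-word equivalent of the 8192-token limit. Word-based counting only approximates real tokenization, and the function name is illustrative, not the tool's actual code.

```javascript
// Word-based stand-in for the token limit: 8192 tokens ≈ 6000 words
// per the limits above. Real tokenization differs; this only
// approximates the truncate-from-the-end behavior.
const MAX_WORDS = 6000;

function truncateInput(text) {
  const words = text.trim().split(/\s+/);
  if (words.length <= MAX_WORDS) {
    return { text, truncated: false }; // fits: no warning needed
  }
  // Keep the first MAX_WORDS words; the caller shows a warning toast.
  return { text: words.slice(0, MAX_WORDS).join(" "), truncated: true };
}
```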
Why do nodes overlap on large maps, and can I fix it?

The radial layout algorithm allocates angular sweep proportional to subtree size, but at high node counts (above 100 nodes), minimum angular separation constraints can conflict with available space. The tool enforces a minimum gap of 15° between siblings, which may push outer nodes into each other's space at depth 4 or beyond. Manually dragging overlapping nodes or collapsing subtrees resolves this. The exported PNG captures the current visual state including any manual adjustments.
Is my API key or text stored or shared anywhere?

No. The API key is held in JavaScript memory only and is never written to localStorage or any other persistent store. Text is sent directly to api.groq.com over HTTPS. No intermediary proxy is used. The tool makes no other network requests. All rendering, layout, and interaction logic runs entirely in the browser.
Is the exported PNG high enough quality for print?

Yes. The PNG is exported at 2× device pixel ratio for high-DPI clarity. Background is white. Typical output at default zoom is 3000 - 6000px wide depending on map complexity. The image contains no watermarks or branding. For print use, note that very large maps may need to be scaled or split across pages.
How does the fallback parser decide on structure?

The parser applies three heuristics in sequence: (1) Lines under 60 characters that don't end in a period are treated as potential headings. (2) Paragraphs separated by double line breaks become sibling groups under the nearest preceding heading. (3) Within each paragraph, sentences containing words that appear frequently across the entire text (top 10 by frequency, excluding stop words) are promoted as sub-node labels, with remaining sentences becoming bullet points. This is a rough approximation - it works well for lecture notes with clear section headers, poorly for continuous narrative.
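The three heuristics can be sketched as follows. Only the 60-character heading limit and the top-10 keyword threshold come from the description above; the stop-word list, the sentence-splitting regex, and the output shape are illustrative assumptions.

```javascript
// Small illustrative stop-word list (assumption; the real list is unknown).
const STOP_WORDS = new Set([
  "the", "a", "an", "and", "or", "of", "to", "in", "is",
  "are", "that", "for", "it", "on", "with",
]);

// Heuristic 1: short lines (<60 chars) not ending in a period are headings.
function isHeading(line) {
  const t = line.trim();
  return t.length > 0 && t.length < 60 && !t.endsWith(".");
}

// Support for heuristic 3: top-k non-stop-word frequencies across the text.
function topKeywords(text, k = 10) {
  const freq = new Map();
  for (const w of text.toLowerCase().match(/[a-z']+/g) || []) {
    if (!STOP_WORDS.has(w)) freq.set(w, (freq.get(w) || 0) + 1);
  }
  return [...freq.entries()]
    .sort((a, b) => b[1] - a[1])
    .slice(0, k)
    .map(([w]) => w);
}

// Heuristic 2: double line breaks delimit paragraphs, grouped under the
// nearest preceding heading; heuristic 3 splits each paragraph's
// sentences into sub-node labels (keyword-bearing) vs. bullet points.
function parseOutline(text) {
  const keywords = new Set(topKeywords(text));
  const sections = [];
  let current = { heading: "Document", bullets: [], subnodes: [] };
  for (const block of text.split(/\n\s*\n/)) {
    const t = block.trim();
    if (!t) continue;
    if (isHeading(t)) {
      sections.push(current);
      current = { heading: t, bullets: [], subnodes: [] };
      continue;
    }
    for (const s of t.split(/(?<=[.!?])\s+/)) {
      const hasKeyword = (s.toLowerCase().match(/[a-z']+/g) || [])
        .some((w) => keywords.has(w));
      (hasKeyword ? current.subnodes : current.bullets).push(s);
    }
  }
  sections.push(current);
  // Drop the implicit root section if nothing landed in it.
  return sections.filter(
    (s) => s.bullets.length || s.subnodes.length || s.heading !== "Document"
  );
}
```

As the text notes, this works tolerably for heading-structured notes and degenerates for continuous prose, where almost every sentence shares the same high-frequency words.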