
Buddy
One week, 2026
Team:
Failenn Aselta
The Challenge
Project Overview
Working in teams often leads to miscommunication and friction due to differing interpretations of ideas. While sketching can help bridge this gap, drawings alone are limited in the speed and depth of information they can convey. Buddy seeks to resolve this disconnect by acting as an intermediary that captures conversations and emerging ideas in real time. By preserving intent and context, it reduces misunderstandings and prevents valuable concepts from being lost to memory or misarticulation.
Adam
Thirty-one-year-old Adam Lasenstorn has worked as a product designer for six years. Still, he finds that he and his colleagues often spend significant time working through miscommunications. He wishes there were a dedicated note-taker in the room capturing discussions in real time and visually organizing ideas as they emerge.
How might we improve group communication by 20%?
By implementing a technology which helps clarify ideas visually through LLM image generation.


System Creation


system_prompt = f"""
You are a Visual Assistant. You generate Mermaid.js code OR Fal.ai image prompts.
CURRENT MODE: {'DIAGRAM' if history and history[-1].get('mode') == 'DIAGRAM' else 'SKETCH'} (You can switch based on intent).
TASK:
1. ANALYZE USER INTENT:
- If asking for a chart, graph, flow, or timeline -> output mode: "DIAGRAM".
- If describing a scene, photo, texture, or visual style -> output mode: "SKETCH".
- If referring to "it", "the image", or "that", use the CONTEXT HISTORY to understand what is being modified.
2. FOR DIAGRAMS (Mermaid):
- Return valid Mermaid code only. No backticks.
- Support: graph TD, mindmap, pie, sequenceDiagram, xychart-beta, gantt.
3. FOR SKETCHES (Images):
- If this is a refinement (e.g. "make it blue"), keep the core details of the previous prompt and apply the change.
- set "is_refinement": true only if editing the previous image.
Return JSON ONLY:
{{ "mode": "DIAGRAM" or "SKETCH", "prompt": "...", "is_refinement": true/false }}
"""
Context & History Handler
Prompt engineering is system design. To get consistent results, I had to clearly define the LLM's persona, ultimately finding that assigning it the role of a Visual Assistant yielded the cleanest outputs. A major technical hurdle was steering the model to generate diagrams intuitively, without relying on explicit keywords like 'diagram.' I solved this by injecting a diverse library of chart types into the system prompt.
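Because the system prompt asks the model to return JSON only, the application side reduces to parsing that JSON and routing by mode. The sketch below is a minimal, hypothetical handler (the function name `route_visual_request` and the shape of `history` are my assumptions, not the shipped code); it also shows how a refinement can pull the previous prompt back in as context:

```python
import json

def route_visual_request(llm_output: str, history: list) -> dict:
    """Parse the assistant's JSON reply and route by mode.

    `llm_output` is the raw text returned by the LLM; `history` is the
    running list of prior turns (each a dict with 'mode' and 'prompt' keys).
    """
    result = json.loads(llm_output)
    if result["mode"] not in ("DIAGRAM", "SKETCH"):
        raise ValueError(f"Unexpected mode: {result['mode']}")

    # Refinements reuse the previous prompt as context, so edits like
    # "make it blue" keep the core details of the earlier image.
    if result.get("is_refinement") and history:
        result["base_prompt"] = history[-1].get("prompt", "")

    history.append(result)
    return result
```

The returned dict can then be dispatched either to a Mermaid renderer (DIAGRAM) or to the image-generation API (SKETCH).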
Export Handler
Without a persistent export feature, a generative tool fails to truly solve for user retention and utility. I addressed this by engineering a session-commit function that dynamically zips all generated assets and transcripts into a universal PDF format. This turned a transient AI conversation into a professional 'leave-behind' artifact, ensuring the design logic remains accessible for the user long after the application is closed.
# (inside a FastAPI export endpoint)
import io
import zipfile
from reportlab.pdfgen import canvas
from reportlab.lib.pagesizes import letter
from fastapi.responses import StreamingResponse

# Create a PDF summary to ensure universal accessibility
pdf_buffer = io.BytesIO()
c = canvas.Canvas(pdf_buffer, pagesize=letter)
text_obj = c.beginText(40, 750)
for line in summary_lines:
    text_obj.textLine(line)
c.drawText(text_obj)
c.save()

# Package into a downloadable ZIP artifact
zip_buffer = io.BytesIO()
with zipfile.ZipFile(zip_buffer, "w") as zip_file:
    zip_file.writestr("session_summary.pdf", pdf_buffer.getvalue())
zip_buffer.seek(0)
return StreamingResponse(
    zip_buffer,
    media_type="application/zip",
    headers={"Content-Disposition": "attachment; filename=session_export.zip"}
)

System Integration
Treating this platform as a shared technical environment meant engineering it to actively prevent miscommunication. I designed the system logic to output both structural diagrams and visual sketches, acting as a collaborative bridge between design and engineering teams. To make the tool reliable and natural to use, I added a strict hallucination filter, universally accessible zipped PDF outputs, and a 2-second silence trigger, a timing anchored in cognitive linguistics research on natural human thought pauses.
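The 2-second silence trigger can be illustrated with a small sketch. Everything here is an assumption about the interface: `transcript_chunks` stands in for a stream of timestamped text from a speech-to-text service, and the generator flushes an utterance whenever the gap between chunks reaches the threshold:

```python
SILENCE_THRESHOLD = 2.0  # seconds; anchored in research on natural speech pauses

def wait_for_pause(transcript_chunks, threshold=SILENCE_THRESHOLD):
    """Yield accumulated speech each time the speaker pauses for `threshold` seconds.

    `transcript_chunks` is an iterable of (timestamp, text) tuples, e.g. from
    a streaming speech-to-text service (hypothetical interface).
    """
    buffer = []
    last_time = None
    for timestamp, text in transcript_chunks:
        # A gap of `threshold` seconds marks a natural thought boundary.
        if last_time is not None and timestamp - last_time >= threshold and buffer:
            yield " ".join(buffer)
            buffer = []
        buffer.append(text)
        last_time = timestamp
    if buffer:  # flush whatever remains when the stream ends
        yield " ".join(buffer)
```

Each yielded segment would then be sent to the LLM as one complete thought, so Buddy generates a visual per pause rather than interrupting mid-sentence.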


Lo-Fi Wireframe
The layout centers on pure functionality, featuring a button to initiate image generation, a control to send information to a connected computer, and navigation arrows to cycle through previous ideas. The software is designed for use on a phone or laptop to minimize distractions and allow the user to remain fully focused on the conversation.
Figma Video
The original design featured a white background with purple buttons to evoke creativity. A simple “Start Vibing” button was included to suggest an enjoyable and fluid ideation process. The top portion displayed the amorphous AI logo for Buddy, reflecting a form that is both simple and open to interpretation. In the bottom left corner, a status indicator was placed to clearly communicate Buddy’s current state, ensuring the user could easily understand each phase of interaction.

Localhost Video
In the localhost prototype, the color palette was slightly lightened to draw greater visual attention to the generated image. The logo was removed from the top portion to create additional space for the content and reduce redundancy. Additionally, the Buddy status indicator was revised from “Off” to “Idle” to communicate that the system is active and ready, reinforcing a sense of responsiveness and preparedness.
Hardware Assembly Video
I opted for a stylized persona over a 3D mesh because pure abstraction often feels cold, leading to a "trust gap." While meshes signal objectivity, they fail to build the emotional rapport necessary for complex interactions. By utilizing a gold-skinned, non-gendered character, I bypass the Uncanny Valley, signaling that the agent is not human while leveraging the "Buddha-like" associations of deep reasoning. This stylized form allows for "anthropomorphic trust," where the agent can visually "look" at data or point to decision paths, making its internal state legible.
Images of Different Parts
Photo of Object

Conclusion
(NOT FINISHED )
By treating the E*TRADE dashboard as a high-stakes 'data environment' rather than a static website, I applied architectural rigor to solve for user retention. The redesign moves the platform from a state of 'Display Disorder' to 'Systemic Clarity.' This redesign proves that when design is anchored in clinical research, like the MIT AgeLab's findings on glance time, we can move beyond aesthetics to engineer behavioral outcomes. For Hannah Goodman, this means at least a 10% boost in trader success, which directly drives the 40% retention goal.
Transitioning from architecture to UX, I realized that both fields share a common 'North Star': the human nervous system. Whether designing a building or a trading platform, the goal is to manage environmental stress to facilitate human success. This project proved that when we anchor design in clinical research, we can move beyond 'aesthetics' and begin engineering behavioral outcomes that solve real business problems, like a 40% increase in user retention.
Final Video of Product
Bibliography
Arias, Ernesto G., and Gerhard Fischer. "Boundary Objects: Their Role in Articulating the Task at Hand and Making Information Relevant to It." International ICSC Symposium on Interactive and Collaborative Computing (ICC'2000). University of Colorado Boulder, 2000.
Brubaker, E. R., S. D. Sheppard, P. J. Hinds, and M. C. Yang. "Objects of Collaboration: Roles of Objects in Spanning Knowledge Boundaries in a Design Company." Volume 6: 34th International Conference on Design Theory and Methodology (DTM). Massachusetts Institute of Technology, 2022.
Huang, Y.-H. "Understanding the Collaboration Difficulties Between UX Designers and Developers in Agile Environments." Master's thesis, Purdue University, 2018.