xx.xx.2025 - xx.xx.2025 / Week 11 - Week 14
IAN CHOO XIN ZHE / 0369451
Experiential Design / Bachelor of Design (Honours) in Creative Media
MIB
Final Project
Students will synthesise the knowledge gained in Tasks 1, 2, and 3 for
application in Task 4. Students will create and integrate visual assets and
refine the prototype into a complete, working, and functional product
experience.
First, I exported all the completed app screens from Figma as PNG files to
keep the quality sharp and maintain any transparency where needed. After
that, I imported these PNG files into Unity, placing them inside the
Assets > UI > Images folder for better organization.
Once inside Unity, I converted the files from PNG textures into sprites so
that Unity could properly detect and use them within the UI system. This
step was important to make sure the designs would display correctly on the
canvas.
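In practice this just means changing each texture's Texture Type to Sprite (2D and UI) in the Inspector. Purely as a sketch, assuming the Assets > UI > Images folder mentioned above, a small editor script like this could apply the same setting automatically (the class name is my own, and the script would need to live in an Editor folder):

```csharp
using UnityEditor;

// Editor-only sketch: marks any texture imported under Assets/UI/Images as a Sprite.
public class UISpriteImporter : AssetPostprocessor
{
    void OnPreprocessTexture()
    {
        if (assetPath.StartsWith("Assets/UI/Images"))
        {
            var importer = (TextureImporter)assetImporter;
            importer.textureType = TextureImporterType.Sprite;
        }
    }
}
```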
Next, I created a Canvas in Unity, setting it to match the size of my UI
screens for accurate scaling. I then applied the exported screen designs to
Image components so they would display properly within the Canvas. To keep
things organized, I created individual Panels for each screen in the MVP flow
and placed the corresponding UI screen images inside their respective panels.
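The scaling itself was handled through the Canvas Scaler component set to Scale With Screen Size, with the reference resolution matched to the Figma frames. A minimal sketch of that setup (the 390 x 844 reference resolution is an assumed phone-frame size, not the project's actual value):

```csharp
using UnityEngine;
using UnityEngine.UI;

// Sketch: configures the Canvas Scaler to match a phone-sized Figma frame.
[RequireComponent(typeof(CanvasScaler))]
public class CanvasSetup : MonoBehaviour
{
    void Awake()
    {
        var scaler = GetComponent<CanvasScaler>();
        scaler.uiScaleMode = CanvasScaler.ScaleMode.ScaleWithScreenSize;
        scaler.referenceResolution = new Vector2(390f, 844f); // assumed frame size
        scaler.matchWidthOrHeight = 0.5f;                     // balance width/height matching
    }
}
```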
After setting up the visuals, I moved on to making the screens interactive.
I added
transparent buttons over
the areas of the UI that needed to be clickable, such as navigation or AR
activation elements. These buttons were sized to perfectly match the
interactive parts of the screens, ensuring that the user experience would
align visually and functionally.
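In the Inspector this comes down to setting each button Image's alpha to 0 while leaving Raycast Target enabled. The same idea expressed in code, purely for illustration:

```csharp
using UnityEngine;
using UnityEngine.UI;

// Sketch: makes a button's graphic invisible while keeping it clickable.
[RequireComponent(typeof(Image))]
public class TransparentHotspot : MonoBehaviour
{
    void Awake()
    {
        var image = GetComponent<Image>();
        image.color = new Color(1f, 1f, 1f, 0f); // fully transparent
        image.raycastTarget = true;              // still receives pointer clicks
    }
}
```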
After finishing the UI setup, I moved on to making the app interactive. I
added a C# script in Unity
and wrote the logic to handle navigation, allowing the buttons to
move between screens both forward and backward. Once the script was ready, I
applied it to the respective buttons in the UI. After testing, the
interactions worked as intended — the buttons successfully navigated through
the screens, making the MVP flow functional.
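The original script isn't reproduced here, but the forward/backward behaviour it handled can be sketched roughly like this, with each button toggling the active screen (the field names are placeholders):

```csharp
using UnityEngine;

// Sketch: simple forward/backward navigation between neighbouring screens.
public class ScreenNavigation : MonoBehaviour
{
    [SerializeField] private GameObject currentScreen;
    [SerializeField] private GameObject nextScreen;
    [SerializeField] private GameObject previousScreen;

    // Called by a "next" button's OnClick event.
    public void GoForward()
    {
        currentScreen.SetActive(false);
        nextScreen.SetActive(true);
    }

    // Called by a "back" button's OnClick event.
    public void GoBack()
    {
        currentScreen.SetActive(false);
        previousScreen.SetActive(true);
    }
}
```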
For the AR features, we decided it was best to start fresh with
a new Unity file. This allowed us to reorganize and arrange
everything in a cleaner and more structured way, making the
project easier to manage. As part of this process, I had to
re-import all of my UI PNG files from Figma into the new file.
Once imported, I set everything up again—making sure the
screens, panels, and buttons were properly placed—so that the UI
was ready to integrate smoothly with the AR functionality.
I began by setting up all the panels and screens again in the new Unity
file, making sure each one was properly organized and aligned to match the
MVP flow. Once the visual structure was in place, the next step was to work
on the interactions. This involved adding the necessary buttons and linking
them with scripts so users could navigate smoothly between the screens, just
like in the earlier version.
Once the panels and screens were ready, I moved on to setting up the buttons
for each page. This time, I applied what I learned from my previous
experience—making all the buttons transparent right from the start. This
made the process of importing and aligning everything much smoother, as I
could quickly place the buttons over the interactive areas without needing
extra adjustments. Overall, setting up the UI in this new file was faster
and more efficient compared to the first attempt.
After the buttons were in place, I moved on to setting up all the
interactions. I started by creating a new script in Unity, naming it
UIManager. This script was
designed to handle all the navigation logic, allowing the buttons to move
back and forth between the different pages smoothly. Centralizing the
navigation in this script made the setup more organized and much easier to
manage than hardcoding each button individually.
Inside the UIManager script,
I created empty object references to represent each panel in the app, then
linked all the different panels to their respective sections within the
script. This setup ensured that Unity could correctly identify which panel to
show or hide whenever a button was clicked. Organizing it this way made the
navigation more structured and reliable, keeping the interactions between
screens smooth and error-free.
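The actual UIManager isn't shown in this post, but based on the description above, a minimal sketch could look like this, with the panel references assigned in the Inspector and a small history stack for going back (the panel names are assumptions):

```csharp
using System.Collections.Generic;
using UnityEngine;

// Sketch: centralised navigation between UI panels.
public class UIManager : MonoBehaviour
{
    // Panel references assigned in the Inspector (placeholder names).
    [SerializeField] private GameObject homePanel;
    [SerializeField] private GameObject cameraPanel;
    [SerializeField] private GameObject sneakerDetailPanel;

    private readonly Stack<GameObject> history = new Stack<GameObject>();
    private GameObject currentPanel;

    void Start()
    {
        currentPanel = homePanel;
    }

    // Hooked to a button's OnClick, with the target panel passed as the argument.
    public void ShowPanel(GameObject target)
    {
        if (currentPanel != null)
        {
            history.Push(currentPanel);
            currentPanel.SetActive(false);
        }
        target.SetActive(true);
        currentPanel = target;
    }

    // Hooked to "back" buttons; returns to the previously shown panel.
    public void GoBack()
    {
        if (history.Count == 0) return;
        currentPanel.SetActive(false);
        currentPanel = history.Pop();
        currentPanel.SetActive(true);
    }
}
```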
Once the UIManager was set up and all the panels were linked, it was time to
apply the script to the buttons so they could actually function. I did this by
adding an entry to each button's OnClick event in Unity and attaching the
UIManager object to handle the navigation. For every button, I selected the
appropriate function from the dropdown menu based on its specific
purpose—whether it was moving forward, going back, or opening a specific
panel. This made each button responsive and ensured the navigation flow worked
as intended.
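All of this wiring was done through the OnClick list in the Inspector. For illustration only, the equivalent in code would look something like this (the button and panel names are made up):

```csharp
using UnityEngine;
using UnityEngine.UI;

// Sketch: the code equivalent of assigning UIManager functions in a button's OnClick list.
public class ButtonWiringExample : MonoBehaviour
{
    [SerializeField] private Button openCameraButton; // placeholder button
    [SerializeField] private Button backButton;       // placeholder button
    [SerializeField] private UIManager uiManager;
    [SerializeField] private GameObject cameraPanel;

    void Start()
    {
        openCameraButton.onClick.AddListener(() => uiManager.ShowPanel(cameraPanel));
        backButton.onClick.AddListener(uiManager.GoBack);
    }
}
```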
After setting up the navigation, we moved on to linking the AR
functions to the
camera panel, which
marked the final step in completing the core functionality of
our app. Once everything was connected, we tested the app using
Unity’s
Scene view to
ensure all the buttons, panels, and AR features were working as
intended.
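The exact hookup depended on our AR setup, but conceptually the camera panel just needed to enable the AR view when it opened. A rough sketch of that idea (the object names are assumptions, not our actual hierarchy):

```csharp
using UnityEngine;

// Sketch: shows the AR view only while the camera panel is open.
public class CameraPanelController : MonoBehaviour
{
    [SerializeField] private GameObject arCamera;    // the AR camera object in the scene
    [SerializeField] private GameObject cameraPanel; // the UI panel shown over the AR view

    public void OpenCameraPanel()
    {
        cameraPanel.SetActive(true);
        arCamera.SetActive(true);
    }

    public void CloseCameraPanel()
    {
        cameraPanel.SetActive(false);
        arCamera.SetActive(false);
    }
}
```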
For the AR portion, we selected
three different image targets
to represent three different sneakers. Each sneaker could be
showcased by scanning its respective image target, which would
then display the 3D model in AR. Originally, our concept was
aimed at creating a
full AR try-on experience
where sneakers would appear directly on the user’s feet.
However, due to technical challenges, time limitations, and the
complexity of foot tracking, we made the decision to pivot.
Instead, the app became more of a
sneaker showcase experience. This allowed users to view high-quality 3D models of sneakers
in AR, rotate them freely, zoom in to see details, and switch
between different models using the UI. While it wasn’t the
original foot-scanning concept, this approach still delivered on
the idea of enhancing sneaker interaction through AR—showing the
designs in detail and creating a more immersive experience than
traditional online product images.
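The rotate and zoom interactions described above can be approximated with a simple touch script on the sneaker model; this is a sketch of the general approach rather than our exact implementation:

```csharp
using UnityEngine;

// Sketch: one-finger drag rotates the sneaker model, two-finger pinch scales it.
public class SneakerViewer : MonoBehaviour
{
    [SerializeField] private float rotateSpeed = 0.3f;
    [SerializeField] private float zoomSpeed = 0.001f;
    [SerializeField] private float minScale = 0.5f;
    [SerializeField] private float maxScale = 2f;

    void Update()
    {
        if (Input.touchCount == 1)
        {
            Touch touch = Input.GetTouch(0);
            if (touch.phase == TouchPhase.Moved)
            {
                // Horizontal drag spins the model around the vertical axis.
                transform.Rotate(Vector3.up, -touch.deltaPosition.x * rotateSpeed, Space.World);
            }
        }
        else if (Input.touchCount == 2)
        {
            Touch t0 = Input.GetTouch(0);
            Touch t1 = Input.GetTouch(1);

            // Compare the previous and current distance between the two touches.
            float prevDistance = ((t0.position - t0.deltaPosition) - (t1.position - t1.deltaPosition)).magnitude;
            float currDistance = (t0.position - t1.position).magnitude;

            float newScale = Mathf.Clamp(
                transform.localScale.x + (currDistance - prevDistance) * zoomSpeed,
                minScale, maxScale);
            transform.localScale = Vector3.one * newScale;
        }
    }
}
```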
Once everything was tested and confirmed to be working properly,
I uploaded all the project files onto Google Drive to ensure
everything was backed up and accessible. After double-checking
that the files were organized and complete, I proceeded to
submit my work. It was a relief to have everything finalized and
turned in after all the effort put into setting up the app and
refining the AR features.
Final Reflection
Working on bringing our Sneakpeek app to life was
both a challenging and rewarding journey. At the
start, I felt a mix of excitement and nervousness —
excited because the idea was unique and close to my
interests in fashion and technology, but nervous
because I knew the technical demands ahead would be
tough, especially as a beginner in AR development.
One of the highlights was seeing our designs come
alive in Unity. Importing the UI from Figma and
making everything interactive felt like a huge
accomplishment. It was satisfying to watch buttons
respond as expected and to test the AR sneaker
models displaying in real time. Each small win
boosted my confidence and deepened my understanding
of the development pipeline — from design to code
integration.
That said, the process wasn’t without frustrations.
The biggest challenge was definitely the shift from
our original vision of AR foot scanning to a simpler
sneaker showcase. At first, it felt like a setback
because foot tracking was a feature I was really
eager to build. But reflecting on it, this pivot
taught me a valuable lesson about flexibility in
design and development — sometimes it’s better to
deliver a polished, achievable product than to get
stuck chasing a perfect but impractical feature.
Collaborating with Michael was another important
part of the experience. Splitting the work helped
manage the workload and allowed us both to focus on
our strengths—he concentrated on making the MVP
functional in Unity, while I focused on UI design
and visual direction. Communication and coordinating
our tasks were crucial and improved as we
progressed.
Overall, this project has been a huge learning
curve. I gained hands-on experience in AR
development, UI design for apps, and practical
problem-solving under time constraints. Beyond
technical skills, I learned how important it is to
be adaptable, to communicate clearly within a team,
and to maintain a user-centered mindset when
refining the product.
While I’m proud of what we’ve accomplished, I’m also
aware there’s so much more to explore and improve.
This project has ignited my interest in AR and UX
design even more, and I’m excited to apply these
lessons to future creative challenges.
SneakPeek UI
Drive link: https://drive.google.com/drive/folders/1XA_392wa76_kLV95LQcjGfdnjWD8qaQ3?usp=sharing
Partner's blog link: https://michaelbdcm.blogspot.com/2025/08/experiential-design-final-task.html