iOS App for Estheticians
Overview & Requirements
Stacy Stewart (a practicing esthetician in San Francisco) and her fiancé Stephen Butler approached me to help bring their MVP of a smart and simple digital assistant for estheticians to market. Stacy was frustrated that there was no simple way for her independent practice to keep track of clients and schedule follow-ups.
The goal was to design and develop an iOS app to help estheticians take their practice to the next level by keeping track of client sessions, taking clinical notes, capturing image markups, and scheduling timely follow-up reminders.
They had already fleshed out a nearly complete wireframe of how the MVP would be laid out and what the initial features would be.
I used my full-stack background to plan a holistic approach that would push my learning while also meeting the time and financial goals of the project. I decided on a declarative UI with a reactive data flow. This can yield huge time savings when building, updating, and laying out app interfaces, at the cost of more upfront learning.
I chose Apple's new declarative SwiftUI framework over the traditional UIKit for its relative simplicity and composability. SwiftUI's newness meant dealing with odd quirks, bugs, and the frustrations that come with anything new.
One of the most complex aspects of the SwiftUI views was the client image magnifier feature. Building this from scratch taught me a lot about the small nuances of SwiftUI modifiers, gestures, coordinate spaces, and view reusability. The magnifier I built ended up being far more flexible than initially intended. Swift's composability means the magnifier works on any view passed into it: images, navigation views, buttons, or custom-built views.
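The shape of such a reusable magnifier can be sketched as a view generic over its content. This is an illustrative reconstruction, not the app's actual code; the names `Magnifier`, `scale`, and `lensRadius` are assumptions:

```swift
import SwiftUI

// A hedged sketch of a generic magnifier: any view can be wrapped,
// and a drag gesture floats a circular "lens" above the finger.
struct Magnifier<Content: View>: View {
    private let content: Content
    private let scale: CGFloat
    private let lensRadius: CGFloat
    @State private var touch: CGPoint?

    init(scale: CGFloat = 2.5, lensRadius: CGFloat = 60,
         @ViewBuilder content: () -> Content) {
        self.scale = scale
        self.lensRadius = lensRadius
        self.content = content()
    }

    var body: some View {
        content
            .overlay(
                GeometryReader { proxy in
                    if let touch {
                        // Draw the content a second time, scaled up and
                        // offset so the touched point lands at the center
                        // of the lens, then clip to a circle.
                        content
                            .frame(width: proxy.size.width, height: proxy.size.height)
                            .scaleEffect(scale)
                            .offset(
                                x: (proxy.size.width / 2 - touch.x) * scale,
                                y: (proxy.size.height / 2 - touch.y) * scale
                            )
                            .frame(width: lensRadius * 2, height: lensRadius * 2)
                            .clipShape(Circle())
                            .overlay(Circle().stroke(.gray))
                            .position(x: touch.x, y: touch.y - lensRadius * 1.5)
                    }
                }
            )
            .gesture(
                DragGesture(minimumDistance: 0)
                    .onChanged { touch = $0.location }
                    .onEnded { _ in touch = nil }
            )
    }
}
```

Because the wrapped content is just a `View`, usage is the same for a photo (`Magnifier { Image("client-photo").resizable() }`) as for any custom view hierarchy.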
One of the steepest learning curves that came with SwiftUI was the paradigm shift to reactive data. It ensures that what the user sees always matches the backing data: when the data updates, the view updates instantly to reflect the change.
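In its simplest form, that paradigm looks like this (a minimal illustration, not app code): the view is a function of its state, so mutating the state re-renders the view automatically.

```swift
import SwiftUI

// Minimal reactive-data example: SwiftUI watches `count` via @State,
// so tapping the button instantly updates the label.
struct CounterView: View {
    @State private var count = 0

    var body: some View {
        VStack {
            Text("Sessions logged: \(count)") // always reflects `count`
            Button("Log session") { count += 1 }
        }
    }
}
```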
The first goal I set after picking SwiftUI with a reactive data approach was researching how to store the data locally. The MVP was planned as iOS-only with local data at first, with the possibility of expanding to other platforms later. I decided against Core Data, as it's Apple-only and not designed to handle reactive data. I also wanted something mature and established in the dev community, with ways to integrate with Apple's Combine framework.
I ended up picking Realm because I had dabbled with it previously and it met all the requirements:
- Reactive and fast
- Popular and trusted by many mobile developers
- Offline first with optional two-way syncing with minimal dev effort
Combine with Realm
After picking Realm as the storage option, I needed to figure out how to fit it in with Combine and SwiftUI. Many hours of trial, error, and research led to a relatively simple four-layer data flow from Realm to SwiftUI.
On top of the low-level Realm storage layer I built a generic DataObserver layer. Its job is to observe Realm for changes, convert the changed data into abstracted structs, and push them through Combine publishers that the DataStores can subscribe to. This was needed because Realm on iOS had no native support for Combine at the time of building.
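A hand-rolled bridge like that might look roughly as follows. This is a hedged sketch, not the app's actual DataObserver; the generic parameters and `transform` closure are assumptions, while `Results.observe` and `NotificationToken` are real Realm Swift APIs:

```swift
import Combine
import RealmSwift

// Sketch of the DataObserver layer: watches a Realm collection and
// republishes its contents as plain value types through Combine.
final class DataObserver<Model: Object, Abstract> {
    // Downstream DataStores subscribe to this publisher.
    let publisher = CurrentValueSubject<[Abstract], Never>([])
    private var token: NotificationToken?

    init(realm: Realm, transform: @escaping (Model) -> Abstract) {
        let results = realm.objects(Model.self)
        // Fires once with the initial data, then on every write
        // that touches the collection.
        token = results.observe { [weak self] _ in
            self?.publisher.send(results.map(transform))
        }
    }

    deinit { token?.invalidate() }
}
```

Mapping the live Realm objects into detached structs at this boundary is what keeps the upper layers free of Realm's threading and lifetime rules.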
The DataStore layer is simply grouped, relational instances of DataObservers. These stores are accessed from SwiftUI through the EnvironmentObject or ObservedObject property wrappers, which automatically update the corresponding views whenever the underlying publishers send new data.
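The top of the stack can be sketched like this (illustrative names throughout; `Client` and `ClientStore` are stand-ins for the app's real types): a store subscribes to the lower layer's publisher and republishes through `@Published`, which the SwiftUI property wrappers observe.

```swift
import Combine
import SwiftUI

// Illustrative model; the real app's types differ.
struct Client: Identifiable {
    let id: UUID
    let name: String
}

// Sketch of a DataStore: bridges a Combine publisher into an
// ObservableObject that SwiftUI views can watch.
final class ClientStore: ObservableObject {
    @Published private(set) var clients: [Client] = []
    private var cancellables = Set<AnyCancellable>()

    init(clientsPublisher: AnyPublisher<[Client], Never>) {
        clientsPublisher
            .receive(on: DispatchQueue.main) // UI updates on the main thread
            .sink { [weak self] in self?.clients = $0 }
            .store(in: &cancellables)
    }
}

struct ClientListView: View {
    // Injected via .environmentObject(_:); the view re-renders
    // whenever the store's @Published properties change.
    @EnvironmentObject var store: ClientStore

    var body: some View {
        List(store.clients) { client in
            Text(client.name)
        }
    }
}
```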
Once I had the proper data stack in place, polishing the interface and experience was natural and quick. During both design and development I chose to skip pinning down an exact design and move directly to building the product. That lack of full understanding led to confusion and wasted time later in the process. When doing something new, I think it's valuable to spend considerable time reflecting on how it went, especially when you notice shortcomings along the way.
The first step to learning or doing something new is having an accurate, realistic view of your current understanding; without it, there is no starting place. Many factors can skew that perception. It can be skewed above, where you think you have greater understanding than you actually do, or below, where you think you have less understanding than you actually do. Both can have a big impact on how the learning process goes. Overestimating tends to slow learning as you skip over small but important things, while underestimating can lead to inaction for fear of failure or of not doing things "properly". I experienced both sides of this coin while learning SwiftUI and reactive data flow for Facy.
The key to learning is not being afraid to jump into the process no matter where you are, like writing the first sentence of a book or drawing the first line on a canvas, or in this case, studying. So I decided on a plan that would save time by using SwiftUI's quick turnaround as the design tool:
- Skip high-fidelity design
- Build wire-frame in SwiftUI
- Add real data to the mock-up
- Fix bugs and polish experience at end
Study & Practice
Starting out, my study and practice were blended together. Even my limited knowledge of SwiftUI was enough to build out the initial wireframe quickly, and while building the UI I studied the gaps in my understanding. Most of that study focused on the data and how it would flow into the user interface. While this worked well, there is always room to review what you're doing and look for improvements.
S.I.P.D.E Review Process
- Scan Scene
- Identify Problems
- Predict Outcomes
- Decide What to Do
- Execute Decision
S.I.P.D.E is a useful decision-making process usually taught in defensive driving courses. While reviews are reactive to the past, they are also proactive toward the future, so I thought this would be a good fit for reviewing and improving my process on Facy.
Scanning for and identifying the variables that will have a large future impact isn't always easy, but when working with others there are fundamentals we can fall back on.
One of the most important, I feel, is transparent communication. While it's easy to say, it isn't always easy to practice. I feel it's the bedrock for setting clear expectations, giving honest critical feedback, and building trust.
By skipping the high-fidelity design phase I was essentially trying to skip the hard work of asking detailed questions, and in doing so I pushed transparent communication into the background. While skipping this step saved time initially, it resulted in many hours of work that, while leading to a deeper understanding of some things, also meant time wasted on features that were never actually shipped.
Going forward I will spend more time questioning and understanding needs before building, rather than building first and understanding later. While I believe the experiment of replacing static, detailed design docs with dynamic SwiftUI design worked okay, it came with too many drawbacks.
Throughout the whole process of building Facy I had a blast learning, building, and communicating with Stacy and Stephen to bring this simple but delightful product to market. You can install it and see it firsthand if you have an iOS device running 13.2 or later.
If you are like Stacy and me, you couldn't recommend Eli Slade highly enough for effectively getting an MVP out the door. He has a great way of delivering aesthetics that serve the functionality of the product in a timely and professional manner.
Let's Work Together
And make something beautiful!