It always feels alive. And that's really important for your expectation and understanding of the interface, for being comfortable with it, for realizing that it's always going to respond to you when you need it. And that applies to changes in motion as well, not just the start of an interaction, but when you're in the middle of an interaction and you're changing direction.
It's important for us to be responsive to interruption as well. A good example is multitasking on iPhone X. We have this pause gesture, where you slide your finger halfway up the screen and pause, and so we need to figure out how to detect this change in motion.
And so, how do we do this? How do we detect this change in motion? Should we use a timer? Should we wait until your finger has stayed below a certain velocity for a certain amount of time, and then bring in the multitasking cards? Well, it turns out that's too slow. People expect to be able to get to multitasking instantly, and we need a way to respond as fast as they do. So, instead, we look at your finger's acceleration. It turns out there's a huge spike in the acceleration of your finger when you pause.
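As a rough illustration, pause detection of this kind can be sketched by differentiating the finger's velocity samples and triggering on a large deceleration, rather than waiting on a timer. The sampling rate and threshold below are invented values for illustration, not the ones iOS uses.

```python
# Hypothetical sketch: detect a "pause" in a drag gesture by watching for
# a spike in deceleration, instead of waiting for velocity to stay low
# for some timer duration. The threshold is illustrative, not Apple's.

def detect_pause(velocities, dt=1/60, threshold=3000.0):
    """velocities: finger speed samples in points/second, one per frame.
    Returns the index of the first frame whose deceleration exceeds
    `threshold` (points/second^2), or None if no abrupt stop occurs."""
    for i in range(1, len(velocities)):
        accel = (velocities[i] - velocities[i - 1]) / dt
        # A large negative acceleration means the finger stopped abruptly.
        if -accel > threshold:
            return i
    return None

# A fast swipe that stops abruptly trips the detector on the stopping
# frame, while a gradually slowing drag never does:
print(detect_pause([1200, 1250, 1300, 1280, 200, 10]))
print(detect_pause([1200, 1180, 1160, 1140]))
```

Note that the faster the stop, the larger the spike, so detection fires on the very frame the pause begins, matching the behavior described above.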
And, actually the faster you stop, the faster we can detect it. So, it's actually responding to the change in motion, as fast as we know how, instead of waiting for some timer. So, this is a good example of responding to redirection as fast as possible. So, this is the concept of interruption and redirection. This stuff makes the interface feel really, really connected to you. Next, we want to talk a little bit about the architecture of the interface.
How you lay it out, conceptually. And we think when you're doing that, it's important to maintain spatial consistency throughout movement. What does that mean? This kind of mimics the way our sense of object permanence works in the real world.
So, things smoothly leave and enter our perception along symmetric paths. If something disappears one way, we expect it to emerge from where it came. So, if I walked off this stage this way, and then emerged that way, you'd be pretty amazed, right? Because that's impossible. So, we wanted to play into this consistent sense of space that we all have in the world. And what that means is, if something is going out of view in your interface and coming back into view, it should do so along symmetric paths.
It should have a consistent offscreen path as it enters and leaves. A good example of this is actually iOS navigation. When I tap on an element in this list here, it slides in from the right. When I tap the back button, it goes back to the right. It's a symmetric path. Each element has a consistent place where it lives at both states. This also reinforces the gesture. If I choose to slide it myself to the right, because I know that's where it lives, I can do that.
It's expected. So, what if we didn't do this. Here's an example, where when I tap on something, it slides in, and then when I hit back it goes down. And, it feels disconnected and confusing, right? It feels like I'm sending it somewhere. In fact, if I wanted to communicate that I was sending it somewhere, this is how I could do it, right? So, that's the topic of spatial consistency. It helps the gesture feel aligned with our spatial understanding of the world.
Now, the next one is to hint in the direction of the gesture. You know, we humans are always kind of predicting the next few steps of our experience. We're always using the trajectories of everything happening in the world to predict the next few steps of motion. So, we think it's great when an interface plays into that prediction. So, if you have two states here, an initial state and a final state, with an intermediate transition between them, the object should transition smoothly between these two states in a way that it grows from the initial state to the final state, whether it's through a gesture or an animation.
So, a good example is Control Center, actually. We have these modules here in Control Center, where as you press, they grow up and out towards your finger, in the direction of the final state, where they finally just pop open. So, that's hinting. It makes gestures feel expected and predictable. Now, the next important principle is to keep touch interactions lightweight.
You know the lightness of multitouch is one of the most underrated aspects of it, I think. It enables the airy swipes and scrolls, and all the taps and stuff that we're all used to. It's all super lightweight. But, we also want to amplify their motion. You want to take a small input and make a big output, to give that satisfying feeling of moving or throwing something and having a magnified result. So, how does this apply to our interfaces?
Well, it starts with a short, lightweight interaction. And we use all our sensors, all our technology, to understand as much about it as we can, using everything we know, position, velocity, speed, force, to generate a kind of inertial profile of the gesture, of the energy and momentum contained within it.
And then we take that and generate an amplified extension of your movement, one that still feels like an extension of you. So, you get that satisfying result with a light interaction. A good example of this is scrolling, actually. Your finger's only on screen for a brief amount of time, but the system is preserving all your energy and momentum and gracefully transferring it into the interface. So, what if it didn't have this?
Those same swipes, well, they wouldn't get you very far. And, in order to scroll, you'd have to do these long, laborious swipes that would require a lot more manual input. It would be a huge pain to use.
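The momentum transfer that lets those light swipes travel far can be sketched with a projection formula: take the release velocity and sum the geometric decay that per-millisecond friction applies to it. The deceleration rate below mirrors the kind of constant that UIScrollView-style scrolling uses, but treat the exact numbers as assumptions.

```python
# Sketch of momentum projection for scrolling: the release velocity decays
# by a constant rate each millisecond, so the total distance travelled is
# the sum of a geometric series. Numbers are illustrative.

def project(position, velocity, deceleration_rate=0.998):
    """position in points, velocity in points/second.
    Returns the projected resting position after frictional deceleration:
    distance = (v per ms) * r / (1 - r), the geometric-series sum."""
    return position + (velocity / 1000.0) * deceleration_rate / (1.0 - deceleration_rate)

# A light 2000 pt/s flick coasts roughly a full screen-height further:
print(round(project(0, 2000)))
```

The same projection is what lets the interface feel like an extension of the gesture: a small input, measured precisely at release, produces a large but predictable output.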
Another good example of this is the swipe to go home. The amount of time your finger's on screen is very brief, and it ends up being a much more liquid and lightweight gesture that still feels native to the medium of multitouch.
While still being able to reuse a lot of your muscle memory from a button, because you move your finger down on the screen, and back up to the springboard. And, it's not just swipes, it's taps too. It's important for an interface to respond satisfyingly to every interaction.
The interface is signaling to you that it understood you. It's so important for the interface to feel alive and connected to you. So, that's the topic of lightweightness and amplification. The next one is called rubberbanding.
It means softly indicating the boundaries of the interface. So, in this example, the interface is gradually and softly letting you know that there's nothing there, and it's tracking you throughout; it's always letting you know that it understands you. What happens if you didn't do that? Well, it would feel like this: super harsh and disconcerting.
You kind of hit a wall there. It would feel broken, right? And you actually wouldn't know the difference between a frozen phone and a phone that's simply scrolled to the top edge of its content. So, it's really important that the interface is always telling you that you've reached the edge. And, this applies to transitions, too.
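The softly increasing resistance of rubberbanding can be sketched as a function mapping how far the finger has dragged past the edge to how far the content actually moves. This particular expression and its 0.55 coefficient are a commonly cited reconstruction of the iOS feel, not a documented API, so treat them as assumptions.

```python
# Sketch of rubberbanding: content past the edge moves a diminishing
# fraction of the finger's movement. The formula and coefficient are a
# commonly cited reconstruction, not documented values.

def rubberband(offset, dimension, coefficient=0.55):
    """offset: how far past the edge the finger has dragged, in points.
    dimension: the scroll view's size along the drag axis.
    Returns how far the content actually moves past the edge."""
    return (1.0 - 1.0 / (offset * coefficient / dimension + 1.0)) * dimension

# The content tracks the finger but lags more and more the further you
# pull, so you always feel the edge without ever hitting a hard wall:
print(rubberband(0, 667))
print(rubberband(100, 667) < 100)
print(rubberband(200, 667) < 2 * rubberband(100, 667))
```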
It's not just about when you hit the edge; it's also when you hand off from one thing to another. A good example of this is when you transition from sliding up the dock to sliding up the app. It doesn't just hit a wall, where one thing stops tracking and the other takes over. They both hand off in smooth curves, so that you don't feel a harsh moment where you hand off from one thing to another. The next one is to design smooth frames of motion. So, imagine I have a little object here moving up and down.
It's very simple. But we all know this object is not really moving, right? We're just having the perception of it moving, because we're seeing a bunch of frames on the screen in quick succession, and it's giving us the illusion of movement. So, let's take all of those frames of motion and kind of spread them out here.
And we see the ball's motion spread out over time. The thing we're concerned about is right around here, where there's too much visual change between adjacent frames. This is when the perception of the interface becomes a little choppy; you get this visual strobing, because the difference between two adjacent frames is too great, and it strobes against your vision.
Here's an example where you have two things both moving at 30 frames per second. But the one on the left looks a bit smoother than the one on the right, because the one on the right is moving so fast that it's strobing. My perception of its motion is kind of breaking down.
I don't believe that it's moving smoothly any more. So, the important thing to take away is that it's not just about framerate; it's what's in the frames. We're limited by the framerate in how fast we can move something and still preserve smooth motion. So, this one's at 30 frames per second. If we move up to 60 frames per second, you can see that we can actually go a little bit faster and still preserve smooth motion. We can do faster movement without strobing.
And there are additional tricks we can do, too, like motion blur. Motion blur basically bakes more information about the movement into each frame, the way your eyes work, and the way a camera works. You can also take a page from 2D animation and video games with a technique called motion stretching, which stretches the content in each frame to give it an elastic look as it moves with velocity.
And so, in motion, it kind of looks like this. So, each of these different techniques tries to encode more visual information about what's going on in the motion. And I want to focus a little bit on this last one, motion stretching, because we actually do this on iPhone X. When you launch an app, the icon elastically stretches down to become the app as it opens.
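A velocity-driven stretch of this kind can be sketched as a scale factor along the axis of travel with a matching squash across it. The gain and clamp below are invented for illustration; the actual curve iOS uses isn't public.

```python
# Hypothetical sketch of motion stretching: scale content along its
# direction of travel in proportion to speed, squashing it across that
# axis to conserve area. Gain and clamp values are illustrative.

def stretch_scale(velocity, gain=0.0002, max_stretch=1.3):
    """velocity in points/second along the motion axis.
    Returns (scale_along, scale_across) for the current frame."""
    scale = min(1.0 + abs(velocity) * gain, max_stretch)
    return scale, 1.0 / scale

print(stretch_scale(0))     # at rest: no deformation
print(stretch_scale(1000))  # in motion: elongated along the travel axis
```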
And it stretches up in the opposite direction as you close the app, to give you that little bit of extra information between each frame of motion, and make it look a little smoother. Lastly, we want to work with behavior rather than animation. Things in the real world are always in a state of dynamic motion, and they're always being influenced by you. They don't really work like animations, right? There's no animation curve prescribed by real life. So, we want to think about motion more as a conversation between you and the object.
Not prescribed by the interface. So, move away from thinking about static things transitioning into animated things, and instead think about behavior. Nathan's about to dive deep into this one, but here's a quick example. In Photos, there's less mass on the photos, because they're conceptually lighter.
But then when you swipe between apps, there's more mass on the apps; they're conceptually heavier, so we give them more mass. So, that's a little bit about how to design interfaces that think and work like us. It starts with response: to make things feel connected to you, and to accommodate the way our minds are constantly in motion. To maintain spatial consistency, to reinforce a consistent sense of space, with symmetric transitions within that space.
And, to hint in the direction of the gesture, to play into our prediction of the future. And, to maintain lightweight interactions but amplify their output, to get that satisfying response while still keeping the interaction airy and lightweight. And, to have soft boundaries and edges to the interface, so the interface is always gracefully responding to you, even when you hit an edge, or transition from tracking one thing to tracking another.
And, to design smooth dynamic behavior that works in concert with you. So, those are some principles for how to approach building interfaces that feel like an extension of our minds. Now let's dive a little deeper. I'm going to turn it over to my colleague, Nathan de Vries, to talk about designing motion in a way that feels connected to the motion of both you and the natural world.
Thanks, Chan. Hi everyone. My name's Nathan, and I'm super excited to be here today to talk to you about designing with dynamic motion. So, as Chan mentioned, our minds and our bodies are constantly in a state of change, and the world around us is in a state of change. This introduces the expectation that our interfaces behave the same way; as they become more tactile, our expectations of their fidelity get much higher. Now, one way we've used motion in interfaces is through timed animations.
A button is tapped on the screen, and the reins are kind of handed over to the designer, whose job is to craft these perfect frames of animation through time. And once that animation is complete, control is handed back to the person using the interface, for them to continue interacting. So, you can think of animation and interaction as moving linearly through time in this kind of call-and-response pattern.
In a fluid interface, the dynamic nature of the person using the interface kind of shifts control away from us as designers. Instead, our role is to design how the motion behaves in concert with an interaction, and we do this through continuous dynamic behaviors that are always running, always active. It's these dynamic behaviors that I'm going to really focus on today. First of all, we're going to talk about seamless motion, the element that makes dynamic motion feel like an extension of yourself.
Then, we're going to take a look at character. How, even without timing curves, and timed animations, we can introduce the concept of playfulness, or character, or texture to motion in your interfaces. And finally, we'll look at how motion itself gives us some clues about what people intend to do with your interface.
How we can resolve some uncertainty about what a gesture is trying to do by really looking at the motion of the gesture. So, to kick things off, let's look at seamless motion. What do I mean by seamless motion? Let's look at an example that I think we're all familiar with. Here we have a car, cruising along at a constant speed. And then the brakes are applied, slowing it down to a complete stop.
Let's look at it again, but this time we'll plot the position of the car over time. At the very start, the curve is straight and pointing up to the right, showing that the car's position is changing at a constant rate. But then you'll notice the curve starts to bend, to smoothly curve away from that straight line. This is the brakes being applied.
The car is decelerating from friction being introduced. And, by the end of the curve, the curve is completely flat, horizontal, showing that the position is now unchanging. That the car is stopped. So, this position curve is visualizing essentially what we call seamless motion.
The line is completely unbroken, and there are no sudden changes in direction. So, it's smooth and it's seamless. Even when, actually, new dynamic behaviors are being introduced to the motion of the car, like a brake, which is applying friction to the car. And, even when the car comes to a complete stop, you'll notice that the curve is completely smooth. There's this indiscernible quality to it.
You can't tell when the car precisely stopped. So, why am I talking about cars? This is a talk about fluid interfaces, right? So, we feel like the characteristics of the physical world make for great behaviors. Everyone in this room finds the car example so simple because we have a shared understanding, or a shared intuition for how an object like a car moves through the world.
And this makes it a great reference point. Now, I don't mean that we need to build perfect physical simulations of cars that literally drive our interfaces. But we can draw on the motion of a car, of objects that we throw or move around in the physical world, and use it in our own dynamic behaviors to make their motion feel familiar, relatable, or even believable, which is the most important thing.
Now, this idea of referencing the physical world in dynamic behaviors has been in the iPhone since the very beginning, with scrolling. A child can pick up an iPhone and scroll to their favorite app on the Home screen just as easily as they can push a toy car across the floor. So, what are some key characteristics of this scrolling behavior? Well, firstly, it's tapping into that intuition, that shared understanding we all have for objects moving around in the world.
And, our influence on those objects. The motion of the content is perfectly seamless, so while I'm interacting with it, while I'm dragging the content around, my body is providing the fluidity of the movement, because my body is fluid. But, as soon as I let go of the content, it seamlessly coasts to a stop. So, we're kind of maintaining the momentum of the effort being put into the interface. The amount of friction that's being used for scrolling is consistent, which makes it predictable, and very easy to master.
And finally, the content comes to an imperceptible stop, kind of like the car, not really knowing precisely when it came to a stop. And, we feel that this distinct lack of an ending kind of reinforces this idea that the content is always moving, and always able to move, so while content is scrolling, it makes it feel like you can just put your finger down again, and continue scrolling.
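That imperceptible stop falls out of exponential decay: the velocity loses a constant fraction each frame, so the frame-to-frame movement shrinks smoothly until it slips below what the eye can notice, with no single frame containing a visible stop. The friction and perceptual threshold below are illustrative assumptions.

```python
# Sketch of an imperceptible stop: per-frame geometric decay of velocity
# means motion fades out rather than ending at a distinct moment. The
# friction and visibility threshold are illustrative values.

def decelerate(velocity, friction=0.95, dt=1/60, threshold=0.5):
    """Integrates coasting motion until the per-frame movement drops
    below `threshold` points, a stand-in for a perceptual limit.
    Returns the total distance coasted."""
    position = 0.0
    while abs(velocity * dt) > threshold:
        position += velocity * dt
        velocity *= friction
    return position

# More release velocity coasts further, and zero velocity goes nowhere:
print(decelerate(2000) > decelerate(1000) > decelerate(0))
```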
You don't have to wait for anything to finish. So, there are innumerable characteristics of the physical world that would make for great behaviors. We don't have time to talk about them all, but I'd like to focus on this next one, because we personally find it indispensable in our own design work. Take a material like this beautiful flower here: its natural fibers have an organic characteristic called elasticity.
And elasticity is the tendency of a material to gracefully return to a resting state once stress or strain is removed. Our own bodies are incredibly elastic. We're capable of running incredibly long distances not because of the strength of our muscles, but because of their ability to relax; it's their elasticity doing this. Our muscles contract, then relax once stress and strain are removed, and this is how we conserve energy. It's what makes us feel natural and organic. The same elasticity is used in iPhone X. Tap an icon on the Home screen, and an elastic behavior is pulling the app towards you.
Bringing it exactly where you want it to be. And when you swipe up from the bottom, the app is placed back on the Home screen in its perfect position. We also use elasticity in scrolling. If I scroll too far and rubberband, like Chan was talking about, when I let go, the content uses elasticity to pull back within the boundaries, helping it get into a resting position, ready for the next time you want to scroll.
So, let's dig in a little deeper on how this elasticity works behind the scenes. You can think of the scrolling content as a ball attached to a spring. On one end of the spring is the current value. This is where the content is on the display. And, the other end of the spring is where the content wants to go because of its elasticity.
So, you've got this spring that's pulling the current value towards the target; its behavior is influencing the position of the content. And what's interesting about a spring is that it does this seamlessly. This is what makes springs such versatile tools for doing fluid interface design.
You kind of get this seamlessness for free; it's baked into the behavior itself. So, we love this behavior of a value moving towards a target. We can just tell the ball where to go, and we'll get this seamless motion of the ball moving towards the target. But we want a little bit more control over how fast it moves.
And, whether it overshoots. So, how do we do that? Well, we could give the ball a little more mass; make it bigger, or heavier. If we do that, it changes the inertia of the ball, its reluctance to start moving.
Or, once it's moving, its unwillingness to stop, and you end up with this little overshoot. Another property we could change is the stiffness of the spring, its tensile strength. This affects the force being applied to the ball, changing how quickly it moves towards the target. And finally, much like the braking of the car, we can change the damping, or the friction of the surface that the ball is sitting on.
And this will act as a kind of brake that slows the ball down over time, also affecting whether it overshoots. So, the physical properties of a ball and a spring are kind of physics-class material, right? They're super useful in a scientific context, but we've found that in our own design work they can be a little overwhelming, or unwieldy, for controlling the behavior of objects on the screen.
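The ball-and-spring model just described can be sketched as a few lines of numerical integration: Hooke's law pulls the value toward the target, damping brakes it, and mass sets the inertia. The constants here are arbitrary illustrative values, not anything shipped in iOS.

```python
# Minimal damped-spring integrator, a sketch of the ball-and-spring model:
# stiffness pulls the current value toward the target, damping acts as a
# brake, and mass sets the ball's inertia. Constants are illustrative.

def spring_step(value, velocity, target, mass=1.0, stiffness=100.0,
                damping=10.0, dt=1/60):
    # Hooke's law plus viscous damping: F = -k * (x - target) - c * v
    force = -stiffness * (value - target) - damping * velocity
    velocity += force / mass * dt   # semi-implicit Euler step
    value += velocity * dt
    return value, velocity

# Step the spring until it settles; with this mass, stiffness, and
# damping it overshoots slightly before coming to rest on the target.
value, velocity = 0.0, 0.0
for _ in range(600):
    value, velocity = spring_step(value, velocity, target=100.0)
print(round(value))
```

Raising the mass or lowering the damping makes the overshoot bigger; raising the stiffness makes the approach faster, exactly the knobs described above.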
So, we think our design tools should have a bit of a human interface to them; they need to reflect the needs of the designer using the tool. So, how do we go about that? How do we simplify these properties down to make them more design-friendly? Well, mass, stiffness, and damping will remain behind the scenes; they're the fundamental properties of the spring system we're using. But we can simplify our interface down to two simple properties: one that controls whether and how much the motion bounces, and one that controls how quickly the value will try to get to the target.
And, you might notice that I haven't used the word duration. We actually like to avoid using duration when we're describing elastic behaviors, because it reinforces this concept of constant dynamic change. The spring is always moving, and it's ready to move somewhere else. Now, the technical terms for these two properties are damping ratio and frequency response. So, if you'd like to use these for your own design work, you can look up those terms, and you'll find easy ways to convert them.
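For reference, the standard damped-harmonic-oscillator relationships give one easy way to do that conversion. This is a sketch of the conversion the talk alludes to, not Apple's exact API; `response` here is treated as roughly the period of one oscillation, in seconds.

```python
import math

# Converting the designer-facing properties (damping ratio, frequency
# response) to the underlying spring constants, using standard damped
# harmonic oscillator relationships. A sketch, not Apple's exact API.

def spring_constants(damping_ratio, response, mass=1.0):
    """damping_ratio: 1.0 settles with no bounce; < 1.0 overshoots.
    response: roughly the period of one oscillation, in seconds.
    Returns (stiffness, damping) for the physical spring."""
    stiffness = mass * (2 * math.pi / response) ** 2
    damping = damping_ratio * 2 * math.sqrt(stiffness * mass)
    return stiffness, damping

# A smooth, non-bouncy spring that responds in about half a second:
k, c = spring_constants(damping_ratio=1.0, response=0.5)
print(round(k, 1), round(c, 1))
```

Notice neither input is a duration: the spring never promises to finish at a fixed time, which is exactly why these properties suit always-moving, interruptible behaviors.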
So, we now have these two simple properties for controlling elastic behaviors. But, there's still an infinite number of possibilities that we can have with these curves. Like, there's just hundreds, thousands, millions of different ways we can configure those two simple properties and get very different behavior. How do we use these to craft a character in our app? To control the feel of our app? Well, first and foremost, we need to remember that our devices are tools.
And we need to respect that tools, when they're used with purpose, require us to not get in the way by introducing unnecessary motion. So, we think you should start simple. A spring doesn't need to overshoot; you don't need to use springy springs. That way you'll get smooth, graceful, seamless motion that doesn't distract from the task at hand, like quickly shooting off an email. So, when is it appropriate to use bounciness?
There's got to be a time when that's appropriate, right? Well, we feel if the gesture that's driving the motion itself has momentum, then you should reward that momentum with a little bit of overshoot.
Put another way, if a gesture has momentum and the motion that follows it has no overshoot, it can often feel broken or unsatisfying. An example of where we use this is in the Music app, which has a small minibar representing Now Playing at the bottom of the screen; you can tap the bar to show Now Playing. Bounciness can also be used as a utility, as a functional means.
It can serve as a helpful hint that there's something more below the surface. With iPhone X, we introduced two buttons on the cover sheet, for turning on the Flashlight and for launching the Camera. To avoid turning on the Flashlight by mistake, we require a more intentional gesture to activate it. But if you don't know that a more intentional gesture is needed, when you tap on the button, it responds with bounciness.
It has this kind of playful feel to it. And that hint is teaching you not only that the button is working and responding to you, but that if you press just a little more firmly, it'll activate. It's hinting in the direction of the motion. So, bounciness can be used to indicate this kind of thing. Now, so far we've been talking about using motion to move things around, or to change their scale or visual representation on the screen.
But we perceive motion in many different ways: through changes in light and color, texture and feel, or even sound; many other sensations that our senses can detect. We feel this is an opportunity to go even further, to go beyond motion, when you're tuning the character of your app. By combining dynamic behaviors for motion with dynamic behaviors for sound and haptics, you can fundamentally change the way an interface feels.
So, when you see, and hear, and feel the result of a gesture, it can transform what was otherwise just a scrolling behavior into something that feels like a truly tactile interface. Now, there's one final thing I want you to think about when you're crafting the character of your app, and that's making it feel cohesive: staying in character. What does this mean? Even within your app, or across the whole system, it's important that you treat behaviors as a family of behaviors.
In scrolling, for example, when I scroll down a page, a scrolling behavior drives the motion, and when I tap the status bar to scroll to the top of the page, an elastic behavior drives it. In both cases, the page feels like it's moving in the same way, like it has the same behavior, even though two different types of behaviors are influencing its motion.
Now, this extends beyond a single interaction like scrolling; it applies to your whole app. If you have a playful app, then you should embrace that character and make your whole app feel the same way, so that once people learn one behavior of your app, they can pick up another really easily, because we learn through repetition, and what we learn bleeds over into other behaviors. So, next up, I'd like to talk a little bit about aligning dynamic motion with intent.
So, for a discrete interaction like a button, it's pretty clear what the intent of the gesture is. You've got three distinct visual representations on screen here. And, when I tap one of them, the outcome is clear. But, with a gesture like a swipe, the intent is less immediately clear. You could say that the intent is almost encoded in the motion of the gesture, and so it's our job, our role, to interpret what the motion means to decide what we should do with it. Let's look at an example.
So, let's say I've made a one-on-one FaceTime call. In FaceTime, we have a small video representation of yourself in the corner of the screen, so you can see what the person on the other end sees. We call this floating video the PIP, short for picture-in-picture. Now, we give the PIP a floating appearance to make it clear that it can be moved.
And it can be moved to any corner of the screen with just a really lightweight flick. So, if we compare that to the Play, Pause, and Skip buttons, what's the difference here? In this case, there are actually four invisible regions that we're dealing with. We no longer have three distinct visual representations on screen being tapped; we have to look at the motion happening through the gesture and intuit what was meant.
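One way to read that intent, sketched below, is to project the PIP's release position forward using its momentum (the same projection idea as scrolling) and pick the corner nearest the projected endpoint, not the corner nearest where the finger lifted. The screen dimensions and deceleration rate are illustrative assumptions.

```python
# Sketch of reading intent from motion: project the flick's endpoint from
# its release velocity, then choose the corner nearest that projection
# rather than the corner nearest the lift-off point. Values illustrative.

def project(position, velocity, deceleration_rate=0.998):
    """1-D momentum projection: position plus the geometric-decay sum."""
    return position + (velocity / 1000.0) * deceleration_rate / (1.0 - deceleration_rate)

def target_corner(position, velocity, corners):
    """position, velocity: (x, y) tuples; corners: candidate rest points."""
    projected = (project(position[0], velocity[0]),
                 project(position[1], velocity[1]))
    return min(corners, key=lambda c: (c[0] - projected[0]) ** 2 +
                                      (c[1] - projected[1]) ** 2)

corners = [(0, 0), (375, 0), (0, 812), (375, 812)]
# A leftward flick from near the bottom-right sends the PIP to the
# bottom-left corner, even though it starts closer to the bottom-right:
print(target_corner((300, 700), (-800, 0), corners))
```

The design choice here is that the gesture's momentum, not its final touch location, encodes the intent, which is what keeps a lightweight flick feeling understood.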