Mouse Events

While keyboard input is one of the more obvious ways our programs have interacted with users thus far, the other fundamental means of user input is the mouse. The software you've designed up to this point may not have done anything with that information, though. Which is a shame, because there are a lot of subtle ways you can encourage a sense of immersion when your software considers and reacts to what the mouse is doing.

For example, one of the simpler ways to build more reactive software is to have certain aspects of the UI change when the user moves the cursor to hover over a particular element. This could be a (?) symbol next to a text field that pops up more information about what is being asked, or a clickable component of the UI that changes color, flashes, or otherwise visually announces that it can be manipulated by the user. Think about the top navigation of many websites you'll visit (including the NIU homepage), where hovering the cursor over "A-Z Index" or "Academics" places a white border on those elements, notifying the user that this element can be manipulated, most likely through a button click.

This is a great design technique because it removes the need for a constant highlight on these features to signify their possible use, allowing the navigation bars to contain much less "noise" or clutter. And when the rest of your webpage is as beautiful as the NIU homepage, you don't want to saturate the user's view with clutter.

Another great example of this is what happens when you actually click on something that functions as a button. Take a closer look at the NIU homepage again, and observe the change from when you hover over "Academics" to when you press (but don't yet release) the mouse button. The white box contracts slightly, and a red line appears at the bottom of the interior of the box. This is meant to mimic how a real-life button becomes depressed when physically pushed down, changing how it looks to (1) include a slight shadow over the face of the button and/or (2) reveal some interior part of the physical board that you can only see once the face of the button is pressed down.
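Here's a minimal sketch of that press-and-release feedback in Windows Forms, assuming a flat-styled Button; the class and control names (PressDemoForm, pressButton) and the color choices are illustrative, not the NIU site's actual styling:

    using System;
    using System.Drawing;
    using System.Windows.Forms;

    public class PressDemoForm : Form
    {
        private readonly Button pressButton = new Button();

        public PressDemoForm()
        {
            pressButton.Text = "Academics";
            pressButton.FlatStyle = FlatStyle.Flat;
            pressButton.FlatAppearance.BorderColor = Color.White;
            pressButton.Location = new Point(20, 20);

            // MouseDown fires the moment the button is pressed (before release),
            // so this is where the "depressed" look belongs.
            pressButton.MouseDown += (s, e) =>
                pressButton.FlatAppearance.BorderColor = Color.DarkRed;

            // MouseUp restores the resting appearance once the press ends.
            pressButton.MouseUp += (s, e) =>
                pressButton.FlatAppearance.BorderColor = Color.White;

            Controls.Add(pressButton);
        }

        [STAThread]
        static void Main() => Application.Run(new PressDemoForm());
    }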

There's a fair amount of code behind making this functionally useless aspect of the UI work, just so that buttons in the digital world behave like they do in the real world. But at the same time, that's exactly the point. We want the user to feel that the UI they're interacting with is not some digital surface they cannot physically touch, but something their "digital" reach (keyboard and mouse) can interact with in exactly the same way their physical reach (fingers) can. This helps provide that sense of immersion that separates the good UX from the great.

There are several types of mouse events that we can tie to specific responses from our software. Groups of them are associated with a single user action, though. For example, when I single-click on an element within my Form, the following events are triggered, in this order (not that you need to define a reaction to each and every one of them; the sketch after the list demonstrates the sequence):

  1. public event MouseEventHandler MouseDown

  2. public event EventHandler Click

  3. public event MouseEventHandler MouseClick

  4. public event MouseEventHandler MouseUp
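
As referenced above, here's a minimal sketch that wires all four events to a single control and logs the order in which they fire; the class and control names (ClickOrderForm, clickPanel) are illustrative choices:

    using System;
    using System.Windows.Forms;

    public class ClickOrderForm : Form
    {
        private readonly Panel clickPanel = new Panel { Dock = DockStyle.Fill };

        public ClickOrderForm()
        {
            // For a single left-click on the panel, these print in the order
            // listed above: MouseDown, Click, MouseClick, MouseUp.
            clickPanel.MouseDown  += (s, e) => Console.WriteLine("MouseDown");
            clickPanel.Click      += (s, e) => Console.WriteLine("Click");
            clickPanel.MouseClick += (s, e) => Console.WriteLine("MouseClick");
            clickPanel.MouseUp    += (s, e) => Console.WriteLine("MouseUp");
            Controls.Add(clickPanel);
        }

        [STAThread]
        static void Main() => Application.Run(new ClickOrderForm());
    }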

All but Click (which, as its EventHandler declaration suggests, receives only a plain EventArgs) deliver a MouseEventArgs object containing information about the mouse cursor at the time of the event. Its properties include:

  1. X and Y: the X and Y coordinates of where the event was triggered, relative to the upper-left corner of the control's client area (not the screen)

  2. Location: a Point structure containing those same X and Y coordinate values

  3. Button: which button was pressed, with values drawn from the MouseButtons enumeration (Left, Right, Middle, None, etc.)

  4. Clicks: the number of times the button was pressed and released

  5. Delta: the number of notches ("detents") by which the mouse wheel has been rotated
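
As a minimal sketch of reading those properties, the handlers below attach to the Form itself (so that wheel messages reach a focused control); the class name MouseArgsForm is an illustrative choice:

    using System;
    using System.Windows.Forms;

    public class MouseArgsForm : Form
    {
        public MouseArgsForm()
        {
            MouseDown += (s, e) =>
            {
                // X, Y, and Location are relative to the control, not the screen.
                Console.WriteLine($"{e.Button} pressed at ({e.X}, {e.Y}); clicks: {e.Clicks}");
            };

            MouseWheel += (s, e) =>
            {
                // Delta is typically a multiple of 120 per detent; a positive
                // value means the wheel rotated forward, away from the user.
                Console.WriteLine($"Wheel moved {e.Delta / 120} detent(s)");
            };
        }

        [STAThread]
        static void Main() => Application.Run(new MouseArgsForm());
    }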

There's also MouseEnter, which detects when the mouse cursor enters the border or client area of the control and uses a plain EventArgs; MouseHover, for when the mouse cursor stops and rests over the control (also an EventArgs); and MouseLeave, for when the mouse cursor leaves the border or client area of the control. The combination of these three can be used to give animation and responsiveness to your UI elements, as the sketch below shows. While it won't be necessary to provide this level of polish for every component of your UI, in the places where it makes sense (e.g. your buttons, your navigation tools, anywhere the user is expected to provide input or can otherwise interact with the UI), it will go a long way towards creating a more immersive UX.
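
Here's a minimal sketch of that hover feedback, assuming a Label standing in for a navigation element; the names (HoverForm, navLabel, hoverTip) and the tooltip text are illustrative:

    using System;
    using System.Drawing;
    using System.Windows.Forms;

    public class HoverForm : Form
    {
        private readonly Label navLabel = new Label
        {
            Text = "A-Z Index",
            AutoSize = true,
            Location = new Point(20, 20)
        };
        private readonly ToolTip hoverTip = new ToolTip();

        public HoverForm()
        {
            // MouseEnter/MouseLeave carry a plain EventArgs: no cursor data,
            // just "the cursor is now over (or no longer over) this control."
            navLabel.MouseEnter += (s, e) => navLabel.BackColor = Color.LightGray;
            navLabel.MouseLeave += (s, e) => navLabel.BackColor = SystemColors.Control;

            // MouseHover fires once the cursor rests on the control for a moment.
            navLabel.MouseHover += (s, e) =>
                hoverTip.Show("Browse the site index", navLabel, 1500);

            Controls.Add(navLabel);
        }

        [STAThread]
        static void Main() => Application.Run(new HoverForm());
    }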