Hands-on with Brightkit

>> 14 December 2008

Update 9 Mar 2009: Brightkit is now HootSuite and has continued to add and improve features.

Brightkit bills itself as “the ultimate Twitter toolbox” that “lets you manage your entire Twitter experience.” I've been using Brightkit for about a week and I like the way it combines services. So far I'm still using other clients for reading, but Brightkit has already become my main authoring and management client. Here are some features I've been enjoying.

Add accounts
Brightkit's dashboard makes it easy to work with my three Twitter profiles. I can switch among profiles to monitor tweets and to follow, reply, re-tweet, and send direct messages. Each profile keeps its own history of pending and sent tweets, @replies, direct messages, and saved searches. Brightkit says they're working on tools to track and manage friends and followers, capabilities it needs to become a complete Twitter presence and reach management application.

Schedule tweets
I tend to generate tweet-worthy observations in batches, but I want to maintain a steady Twitter presence when people I care about are watching. With Brightkit I can schedule tweets at five-minute intervals starting 15 minutes from now. I can delete, edit, or change the send time of a pending message. Brightkit will (optionally) e-mail me when my tweet is sent.

On the downside, you can only post a tweet to one account at a time. My profiles have very different goals, so this hasn't been an issue for me so far, but it's easy to imagine situations where posting to several accounts at once would help.

Monitor reach with link shrinking and tracking
Brightkit integrates Tweetburner-like link shrinking and tracking into the message editor, where it automatically adds the shortened link and updates the character counter. Stats currently report daily click counts, with more detail on the way.

Add editors
You can share authoring responsibilities for a profile without giving people access to all the profiles or to Brightkit administrative functions – useful if you have a social media presence team.

Search and save keywords
Monitor presence and buzz with saved searches for each Twitter profile. Go beyond keywords to find messages containing hashtags or links, sent before or after a certain date, or to, from, or about a Twitter profile. I'd like to be able to monitor streams side-by-side as in Tweetdeck or Monitter, but if I want to check periodically or cycle through topics, the current feature works fine.

All of this comes wrapped in an open, Web 2.0-ish user interface, which I'll review in a later post.


Twitter holiday helpers

>> 07 December 2008

These Twitter-based services will help you save money and stay organized through the retail (er, Christmas) season and beyond.

Find a Wii or WiiFit

Those of us shopping for a Wii or WiiFit (and who isn't?) need all the help we can get. These Twitter bots keep checking major retailers and tweet links when they detect available stock.

  • WiiTracker (twitter.com/WiiTracker) sends updates like this:

    Costco.com has the Nintendo Wii Super Mario Galaxy Bundle in-stock for $350 http://tinyurl.com/ytfnmd 11:11 AM Dec 5th from twitterfeed

  • NowInStock (twitter.com/NowInStock) and NISWiiFit (twitter.com/WiiFit) check stock at Amazon, Best Buy, Walmart, and other retailers (www.nowinstock.net). You'll get a message like this one:

    Nintendo Wii Fit in stock for $89.99. Go here http://tinyurl.com/6rab3l 2:00 AM Nov 28th from web

Get a PC or laptop deal

Dell Outlet announces deals via Twitter at twitter.com/delloutlet. For example,

    20% off any Outlet Inspiron™ Mini. Enter code at checkout: SDN2Z85FXG5XD9 http://tinyurl.com/6gk7cq - expires 11:59PM CT 12/8. Online only 7:06 AM yesterday from web

    New offers posted for Season of Savings this morning. 15-20% off select Dell Outlet products: http://tinyurl.com/5tamne 9:44 AM Dec 4th from web

Track gift shipments

TrackThis (twitter.com/trackthis) sends you package tracking updates by direct message. Send TrackThis a direct message with the FedEx, UPS, USPS, or DHL tracking code followed by a nickname for the package, like this:

    123456789123 Macbook Air

They'll send you a direct message each time your package changes location.

    trackthis Macbook Air: In Transit To: Lousville, KY 09:25 am April 16, 2008

Have a holiday secret you can't wait to share?

Spill the beans to SecretTweet (secrettweet.com). The secret will be posted publicly but completely anonymously at both secrettweet.com and twitter.com/secrettweet.


Tweet Sheet

Finally, get Jason Theodor's super-portable, super-simple "Tweet Sheet" so you'll always have the command list with you.


Chicago Agile Project Management meetup

>> 29 November 2008

If you are an Agile practitioner you might be interested in the Agile Project Management meetup. The meetup is new, has 40+ members, and meets monthly.

The topic for December 2nd is user story life-cycle patterns from backlog to production, of interest to business stakeholders and business analysts as well as project managers and engineers.

Get information at Chicago Agile Project Management at Meetup.


Game-day: your clutter is not my clutter

>> 01 September 2008

37Signals' Ryan Singer joined the controversy over the TripLog/1040 iPhone app's UI. Critics have faulted it as cluttered and, not to put too fine a point on it, ugly. Ryan points out that the designer made a conscious tradeoff in order to meet user requirements for fast access to the two most common actions.

TripLog/1040 is a model of concision compared to real-time game trackers now common on professional sports Web sites such as the NFL Game Centers, the PGA shot tracker, and MLB Gameday. These applications provide a dynamic dashboard that gives direct access to scores, lineups, play-by-play in text and graphics, individual and team stats, and all manner of other information depending on the sport. Although they aren't mobile apps like TripLog/1040, they do illustrate that sometimes a UI needs to have a lot going on if it's going to meet users' expectations.

These applications are targeted at the needs of serious (read: motivated and engaged) fans, whether of a team or a sport. They assume the viewer has a certain minimum of domain knowledge, enough to expect certain kinds of information. Furthermore, sports have long-standing conventions for representing game data with which the viewer may be assumed to be familiar. Event viewers are accustomed to processing several streams of information as they watch a game or event - at minimum the action and the scoreboard, with additional information that varies with venue or medium. Many also have their own collection of facts, stats, and other contextual information at the ready, either in their own memories or their co-viewers'. Some watch multiple events at the same time, through premium broadcast packages, picture-in-picture, or simply by timeslicing their attention across various broadcasts.

Game-tracking applications reproduce this rich information context on the Web. As a standalone experience, the applications re-create the suspense, anticipation, and armchair analysis parts of the sports-watching experience. They can also supplement individual or shared viewing experiences as information resources. Because users don't have to navigate a page stack, they can enjoy the experience rather than hunt - or wait - for hidden information. At the same time, supplemental information is available on demand. Fans can explore it at will, for example during breaks in the action, to get context for the next action, or to try to anticipate a team's next strategic moves.

If these applications were less "cluttered" they would also be less useful, possibly to the point of failing utterly to achieve their purpose. Game-day applications intentionally create a rich information context that matches the needs and characteristics of the audience, and they masterfully leverage the existing cultural context in which sports are played and experienced.


The road is paved with good intentions

There's an interesting discussion going on at Chad Meyers' blog about quality in both design and execution. Chad argues there is such a thing as Good Design, that knowledgeable observers can reliably discern its presence, and that "but it works!" is not a defense or justification for bad design choices.

The bad design characteristics Chad spotlights prevent teams from responding effectively to demands on the application. With each release the team is less and less able to respond, until they are locked into long projects that deliver little incremental value. How do teams start down the path that leads there?

Chad's post hinted at an important consideration that doesn't seem to get enough attention. Too often people make fast delivery of finished features the top priority, then define everything else as speeding up or slowing down delivery. The problem is that driving to "finished features" (where finished = fully realized) prevents the team from engaging in "fast fail, fast feedback" cycles. If there is also a stated desire to work in an "agile" fashion, the team might try to marry the "finished feature" driver to a short-timebox approach, with disastrous results. On a large project, few fully realized features can be started and delivered in a single 2-4 week period.

The Agile team will look for a way to decompose the desired end feature into meaningful chunks and distribute the work across the team such that each 2-4 week period delivers a coherent, complete piece of functionality. The wish-we-were-agile team instead calls on developers to keep the feature "simple" and not to "gold-plate" it. Once headed down this path, all kinds of things become gold-plating - error handling, usability, even the goal of working software can be set aside under the banner of producing "good enough" software quickly.

Of course there is nothing wrong with "simple," "good enough," or "rapid delivery." Problems arise when the team equates "simple" feature realization and "simple" process with undesigned, untested, and incomplete work. The team might find itself choosing to hack up a feature, not because it's a justifiable approach given the business needs but because its inability to decompose the feature and plan across iterations has created artificial pressure to cross the feature off the project plan. That team can report that features are "done" but pays the price later when the customer doesn't accept the feature or the market demands more.


Done-ness, Denial, and Golf

>> 31 August 2008

Totally Test-driven Scrum Delivers Better Objectivity In Measuring Real Progress - Agile Software Process Improvement

    Progress is measured entirely in terms of story points collected for tests passed. There is no "A for Effort" in my way of doing things. It's either testably delivered or it's not.

    The analogy I use - and, yes, it's a golfing analogy - is to differentiate by measuring the completeness of a hole by whether or not the players took the shots they planned to vs. whether or not the ball actually finished up in the hole.

Jason Gorman puts his finger squarely on one of the keys to software development success. Performing tasks only matters if they result in complete and correct work product.

Jason's golf metaphor conveniently illustrates a couple of related antipatterns, "close enough" and "pick it up after 10 strokes."

In "close enough," someone makes a decision that whatever is currently implemented seems pretty done, so the developer should move on and try to get something else implemented. This is like finding yourself on the green putting for double bogey, and deciding to stop and move on to the next tee. You cover all 18 holes in 4 hours, but you can't post a score because you haven't really played the round.

In "pick it up after 10" the team concedes its inability to bring a feature to testable completion. Just as a course policy keeps people moving around the course, the software team decides that they have spent all the time they can on a feature and must move on. Like the 10-stroke golf hole, they've used more time than they expected and didn't achieve the required result, but they figure they've done all they can in the time allowed. If the customer is lucky, the 10th stroke will be on the green close to the hole. If not, the 10th stroke will fail to clear the edge of the bunker and trickle back into the sand.

I am still learning about Agile methods, but I have to believe the Agile team would have a couple of responses. "Close enough" is essentially a quality issue; the team thinks it has implemented something acceptable even if it's not what was asked for or agreed to. If the product owner accepts the close-enough solution, then, just like in the cartoons, the hole can slide over under the ball so it can drop in. Voila! No longer close enough, the feature is now done.

"Pick it up after 10" seems like a reasonable strategy on the surface. It's the golf equivalent of timeboxing. Teams get into trouble only if they mark the feature done and check it off the list. Seems to me the Agile team might talk it over with the customer and decide they're only going to play the front nine today. Playing the hole to completion will mean they can't finish the back nine, and the customer just can't accept the ball-in-the-bunker solution. Teams create problems for themselves when they pick it up and move on to the next hole, card a 10, and don't tell anyone about the feature's unfinished business.

Closely related to "pick up after 10" is the mulligan. In some respects mulligans are necessary, even a mark of good practice, in software development. You learn from the failed attempt, and try again. Just as in golf, though, mulligans don't show up on the score card. The proud player strides off the 18th green tallying up his 90-something score, but everyone in the group knows Mr. Mulligan couldn't have broken 100. On software projects there's a lot more at stake than who buys the first round. Unreported mulligans eat up time, and worse, keep the team from assessing its own likely performance on future efforts. Mulligans often lead to "close enough" and "10-stroke limit" situations.

Ah, golf. You can fool yourself if you like, but the rattle of the ball in the cup is what the game is all about.


Betting on Scrum

>> 23 August 2008

It's been a year since my ScrumMaster training - time to renew my Scrum Alliance membership. Time to consider the value of membership. Time to consider my place in the Agile community. Time to decide if, as an experience designer, I can find a place in that community!

First, what did I gain for my original investment of $1300 out of pocket and a couple of vacation days?

  • I discovered Jeff Patton's excellent work. He offers one of the best collections of experience design resources I've found at his site, AgileProductDesign.com. The discussion list he moderates on agile usability attracts leaders in both fields.
  • I'm happy to work in a framework that values empathy and story in addition to analysis and research.
  • I like working with and learning from engineers. I'm glad I get to talk with them more.
  • I'm passionate about enabling real people to do real things, and with tools that come easily to hand and leave them happy. Seems like lots of Agile engineers are, too. I like that.

I've found some things less congenial.

  • I'm tired of "how many agilists can dance on the head of a pin" discussions. Who's really doing Agile? What methods are really Agile? Can you be an Agile practitioner if you don't produce code? Which Agile guru speaks with the most authentic voice? Can you trace your Agile sensei's pedigree through an unbroken chain of master and student to the founders? Are you still Agile if you learned from someone else? What does the Manifesto mean and whose interpretation is Agile?
  • The Agile world does not seem to be a comfortable place for user experience practitioners, still less for business analysts. I have been stunned by the number and intensity of insulting, patronizing, and downright hostile opinions I have seen expressed. Manifesto values like respect have at times been glaringly absent.
  • Scrum is a hard practice to evangelize. Managing a product backlog or sprint backlog seems to require discipline at least as great as, if not greater than, "traditional" project planning does. It's more structure than people who don't like project management overhead want, and people who have to pass regulatory audits are afraid it's not enough.

The software world has never been without its religious wars, and you can't please all the people all the time. Experience designers, product people, and engineers eventually will find their places under the Agile umbrella.

And somewhere there's a dogma-free Web app team full of people passionate about enabling people to express their personal and professional power through software - and room for one more like them.

At least, I just bet $50 on the possibility!


Designing for the Edges

"Explore your edge cases for the sake of innovation."

Nick Finck is just one of the prominent designers who see the value in edge cases. On the other hand, the casually dismissive "Oh, it's just an edge case!" is all too commonly heard. Before tossing out your edge cases altogether, ask these questions:
  • Is the case a user goal believed to be shared by only a few users?
  • Is the case created when technical limits prevent the application from fulfilling a user goal?
  • Is the case created when users interact with the application in unexpected ways?

If the answer to any of these is "Yes," the edge case indicates an opportunity to introduce delighters/exciters, usability improvements, technical innovation, or trend-anticipating product innovation. Capture this information somewhere for follow-up if you must drop the case from your current design effort.

I saw these factors in action some time ago when I was designing a new, much-requested feature. One customer had what some believed to be a novel requirement. The requirement seemed to make business sense. The solution would extend an existing feature and require adding a minor option to the feature I was designing. Even so, my business stakeholders asserted this customer's request was wholly unique. They were sure no other customer would use the new option because, the stakeholders believed, none of them used our application the way this customer did.

I decided to inquire further. After all, these same people had asked us to invest a significant part of the project budget to design a different capability targeted at the very use pattern they now were sure was used by only one customer. It didn't add up.

Sure enough, there were customers who had asked for something similar. Engineering had not anticipated that requirement in the original design. Later, engineering removed a configuration option from a related feature. Without the configuration option, the first feature was unusable.

Someone had figured out a hack that mimicked the desired behavior, a hack that was subsequently employed for other customers. The hack was "good enough" - until a customer needed both the original feature and the flawed related feature. The workaround obscured an underlying limitation in our application that could have been easily remedied at any time.

This "edge case" qualified on all three criteria: people believed the request was idiosyncratic, a technical limitation prevented solving a customer's business problem, and the need to solve the business problem was unexpected. By repairing the flawed feature we could serve the insistent customer to their satisfaction, and deeper-than-face-value analysis revealed prospective customers would likely appreciate the feature, too.

Read the 4-part series Designing for the Edges at Functioning Form, where Luke Wroblewski collected a number of perspectives on the value of edge cases.


About

Faith Peterson is a versatile business analyst and user experience designer practicing in and around Chicago, IL. She works on Web-enabled business applications for content management, digital asset management, and business process management.
