I run across this a lot: I did when I was at a company implementing this stuff, and I do now as an industry analyst researching how people are doing this and guiding them toward success.
I don't think there is a one-size-fits-all answer, since some companies face greater risks by doing and others by not doing (crimes of commission vs. crimes of omission). Regulations about internal records management are a serious concern for many companies -- and the newer Ent 2.0 tools have not yet caught up with the records retention, labeling, and auditing requirements that may apply to some classes of document. So implementing a Clearspace, Atlassian, Awareness, etc. in this kind of environment will raise some real questions about what goes on there and how we know it's OK.
Note: I wish to be vendor-agnostic here -- I hope you understand that I'm not picking on these vendors, but alluding to the difference between the newer players with very compelling Ent 2.0 features and the content-management-type tools that comply with the DoD 5015.02-STD, April 2007 (Records Management Compliance) standard (http://jitc.fhu.disa.mil/recmgt/register.html) or other standards relevant to things like discoverability and records privacy (in health care), or that have a partner ecosystem of add-on tools that scan their databases for content violations. That matters to some buyers, and could pose a barrier to adoption for the non-compliant vendors trying to sell into these markets.
Note also: Ent 2.0 in these environments can be successful -- but an acceptable-use policy statement alone is not going to be enough in situations where the cost of a mistake is very high.
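To make the compliance gap concrete, here is a purely illustrative sketch of the kind of per-record metadata a records-management-compliant store tracks but most Ent 2.0 tools don't. The field names are invented for illustration, not taken from DoD 5015.02 itself -- consult the actual standard for real requirements.

```python
# Hypothetical sketch of record metadata that records-management
# standards typically require a repository to track: classification
# labels, retention/disposition dates, and an audit trail.
# Field names are invented; see DoD 5015.02-STD for the real spec.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ManagedRecord:
    record_id: str
    classification: str      # e.g. "internal", "confidential"
    retention_until: date    # disposition date from a retention schedule
    audit_trail: list = field(default_factory=list)

    def log_access(self, user, action):
        """Append an audit entry; compliant stores make this tamper-evident."""
        self.audit_trail.append((date.today().isoformat(), user, action))

rec = ManagedRecord("wiki-4711", "internal", date(2015, 7, 1))
rec.log_access("alice", "view")
print(len(rec.audit_trail))  # 1
```

The point of the sketch is simply that labeling, retention, and auditing are structural features of the repository, not something an acceptable-use policy document can bolt on afterward.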
Many others, though, have a different risk profile: they have to seriously consider the cost of failing to collaborate effectively, and ask how likely a real problem is to occur. A policy might help clarify what you want to see happen, but a good example goes a long way toward helping people understand what truly is acceptable use. BTW, I recently blogged about a way to think about this (lower-risk) type of case for internal blogging policies: http://blogs.forrester.com/information_management/2008/07/blogging-at-wor.html. I think many are shy to try Ent 2.0 tools because it's an unknown and they are not comfortable with the newness -- but they need to invest more trust in the fact that most employees really want to do the right thing; they just need to know what it is, and see good examples they can copy.
I'd welcome more comments and discussions about this, since I think this is an issue for many companies who are just starting to dip their toes.
We went through this same discussion for quite some time on our implementation. We are a publicly traded US firm with ~17k employees, so needless to say everyone and their brother had an opinion. In terms of acceptable use, we fought long and hard not to have anything specific to the environment, but instead to extend existing IT AUPs. When looking at our existing IT AUP, it was pretty hard to come up with use cases that would fall outside of its jurisdiction. That way adherence, regardless of environment, was sort of tacitly implemented.
Second, we pushed hard to leverage culture more than policy. We encourage users to comment on objectionable content and mentor the authors in a productive way. My sense is that people would have instinctively recoiled from participation if the heavy hand of "Thou shalt not blog about..." was their starting point. Instead, we encourage more participation by offering quick and easy "management opportunities" to any user. We sort of went with three tenets:
- Don't be an idiot.
- If you are an idiot, expect to get guidance on how not to be in the future.
- Proactively help other people not break Rule 1.
That is kind of a gross oversimplification, but it has worked so far.
If there is one phrase you want to avoid at all costs, it is "police" (unless in reference to Stewart Copeland). Nobody will be encouraged to tear down organizational silos through transparent collaboration and constructive dissent if they feel there is an armed force of sorts waiting to throw them in jail. Also, treating users like possible technology felons seems distrustful and not conducive to participation. That is the point, right?
With regard to the comments on records retention, I think acceptable use and records retention are two sides of the same coin. "Acceptable use," in my opinion, is more about guiding the act of contribution; "records retention" and similar topics are more about storing the results of contribution. Overzealous records retention policies can diminish use in general, acceptable or not. Conversely, laissez-faire AUPs may burn you on regulatory compliance. Your industry will probably drive which side of the coin turns up more often.
Since I know that some companies make a good living extracting and organizing email traffic in the context of lawsuits, I know there will be good money to be made -- after the fact -- in organizing collaboration-related content in the event of lawsuits. That said, is it correct to infer that some companies don't implement all possible safeguards against liability-creating internal communications because the costs, direct and indirect, of doing so would be prohibitive? It reminds me of what companies do following a total network security analysis -- many don't implement all possible voluntary tactics simply because the costs are so great. Does this suggest that a workplace collaboration system with legal compliance built in, such as you suggest, might meet with some market acceptance?
It seems to me that if part of your justification for implementing collaborative technologies includes the ability to search created content, you're sort of stuck with having to deal with retention and compliance issues, right?
Also, I like your three rules!
I think it is a reasonable inference that if a company can make money extracting and discovering emails in the event of litigation, these kinds of companies ought to consider adding services that extract content from wikis, forums, blogs, microblogs, IM chat logs, and community and social networking sites. As long as content of interest is put there, someone will profit in the venture to find it. The classic case is the "smoking gun memo," where someone internally reveals a concern about a product, and in the event of a real problem this communication is used to say, "You see, they knew this product was defective, and sold it anyway..."
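As a purely illustrative sketch of what such a discovery service does at its simplest (the flag phrases, document IDs, and function name here are all hypothetical, not any vendor's actual tooling), an eDiscovery-style keyword scan over collected collaboration content might look like:

```python
# Hypothetical sketch of an eDiscovery-style keyword scan over
# collected collaboration content (wiki pages, blog posts, chat logs).
# The corpus and flag phrases are invented for illustration only.

FLAG_PHRASES = ["known defect", "ship it anyway", "don't put this in writing"]

def flag_smoking_guns(documents):
    """Return (doc_id, phrase) pairs for documents containing flagged language."""
    hits = []
    for doc_id, text in documents.items():
        lowered = text.lower()
        for phrase in FLAG_PHRASES:
            if phrase in lowered:
                hits.append((doc_id, phrase))
    return hits

docs = {
    "wiki/qa-notes": "Testing found a known defect in the tire design.",
    "blog/launch": "Excited to announce the launch next week!",
}
print(flag_smoking_guns(docs))  # [('wiki/qa-notes', 'known defect')]
```

Real litigation-support tools are of course far more sophisticated (deduplication, custodians, privilege review), but the underlying point stands: once the content is stored, finding it is cheap for whoever is motivated to look.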
So at the end of the day, the lawyers (and their extended ecosystem) stand to make money. But the user company who purchases the Ent 2.0 software still has to assess and mitigate their risks. A published policy document is a mitigation, but not enough for some. I believe in an empowered community-leader role to establish and encourage behaviors. Improvement in the tools will help quite a bit too.
As for your other question -- A vendor should consider how expensive it would be to put in these features to comply with the relevant specs, and then assess the biz opportunity (how many deals were lost based on the lack of this, how many new opportunities would be opened up in new markets, with new partners, etc.). Risk issues are a barrier that can be addressed with technology and with behavior. Technology features in this space can really move the ball forward toward your goal. But I'd love to see tools that have the magical mix of great features and a compelling risk mitigation story.
Vendors -- show us the great stuff you are working on! We'd love to see the win-wins.
Gil, your responses to this question are excellent and well documented. There is a fact, however, that seems to go untouched and on which I would like to get your opinion: otherwise innocent transgressions that are magnified by collaboration systems, hurting not only the company, but ESPECIALLY the collaborators. In that respect, corporate policies are grossly unbalanced.
The example of the product failure memo offered in one of the posts hints in that direction. You may be OK talking about the product defect with your boss, casually, in your office or hers, over a cup of tea, because even if the defect turns out to be the defective tire design that kills hundreds of SUV passengers, and you were the first person ever to notice the pattern, you probably won't be touched by discovery. As it probably should be, unless you were instrumental in designing the tire in question. But chat about it on an internal blog or discussion, and you WILL be part of the discovery process. Further, it is fair to assume that you will be hurt more than any other party in the process, short of becoming a full whistle-blower. It WILL hurt you.
So, anything you write can and will come back to haunt you. Since this is quite a sneaky problem to solve (especially for proactive, forward-looking collab apps), it's sad to report that after the first incident (if everybody is lucky, a minor one, where the burn is not too bad), collaboration starts to slow down and cool off as news of the incident spreads (I've seen it on a couple of occasions, and it usually doesn't take more than a couple of days). Forced to downgrade to minimal-trust conditions in our daily collaboration, we expose less and less of our knowledge to others, we avoid building consensus, and above all, we avoid giving bad news. The fact is, we shut down so as not to become fuses for other people's (or the company's) short circuits.
A quick look at existing 'standards' for policies (even external-facing ones, such as your colleague Charlene's, which address even simpler scenarios) shows that the agreements are almost without exception one-sided (13 sentences that start with "I will...", and none that starts with "The company will..."). In any case, I have seen a couple of projects come to a frigid stop in terms of participation when this disparity in responsibility becomes apparent. Have you experienced this problem? Do your recommendations to companies address the need for a responsible stance on the part of companies as well?
Thanks again for your highly educational contributions.
@Dennis - You are totally right. My sense is that the compliance side of the coin will vary by industry, but the AUP side of the coin should remain relatively constant. The two are interrelated, but can be approached individually.
Wait...am I saying it is possible to have a coin with one side bigger than the other? Maybe my coin metaphor has gone too far...
You have made many insightful comments in your reply. I'll address some -- briefly.
You are correct in observing that one bad event can dry up the collaboration environment very quickly. I have seen that happen in my experience too. For example: a colleague's poorly thought-out blog post, and his firing from the company within a few weeks, was a signal to the other bloggers to go mute for a while, and they did -- most stopped altogether. A good example makes a huge difference when it comes to behaviors that are outside the business norm. A success will launch reinforcements; a failure will ground the fleet. This means that the internal collaboration/community-manager role has to be given to a respected thought leader, not an administrator. (Or you need a team of two -- one to handle the administrative things, like wiki-gardening syntax cleanup, and one to monitor what is going on over the wire at a "what are the implications of this" level.)
Even more insightful is your awareness of the policy bias. Policies are typically written like warranties -- they protect the institution. Very Machiavellian. They don't provide safe havens. As a security professional once told me, "Anything you say can be used against you, so why speak when you can nod?" But balancing that is realizing that the underlying principle of the "corporation" is to limit liability for any one person. I'm not an expert in legal theory, so I won't attempt to comment on whether these kinds of laws work well in terms of "corporate liability" vs. individual responsibility.
But I do care about the impact on collaborative business environments. We are here (very simplistically put) because companies are finding that silos limit their business growth. They need more than Theory X management to grow and compete. Companies are inspired by the use of the internet (and frankly by good old-fashioned guilds, associations, support groups, etc.) to find new ways to connect people. If this endeavor is profitable, they will take the risk and reap the benefit. Risk is risk. But we measure it and abate it. Or we are restricted by it. The more we understand it, the better. Risk is fine. FUD is not.
So you are right -- a policy that totally protects the organization and exposes the participant will not encourage participation; it will only allow it.
We have had quite a bit of discussion about this in Oregon. We've come up with the following areas to be addressed in policy.
We are going to try to fit them into three categories: something like Personal Responsibility, Agency Responsibility, and System Responsibility (or??)
We'll make as much use as possible of existing Acceptable Use Policies by referring to them.
- Data Classification levels appropriate for GovSpace
- User profile completeness and accuracy of professional contact and related information
- Appropriate content
- Requesting eDiscovery information.
- Content: Owner & Liability
- Data Retention
- Complying with Agency Policy, Job Description & Manager
- Complying with Collaboration Policy
- Prohibiting personally identifiable information
- Responsibility of sponsoring agency for non-state participants
- Responsibility of specific agency for evaluating appropriate use by their personnel
- Representing an Agency: Follow the Agency Policy, Job Description & Manager Approval (discussed)
Have you seen good examples of internal policies that cover these areas?
Nate, I like your three simple rules. I'm looking for something along those lines -- 'user-friendly' guidelines (since we'll have full legalese guidelines users have to click to accept to join). Since we're a global company, I'm looking for something similar to what you have, but maybe more culturally acceptable for our enterprise.