Project Teams: Recognizing Failures, Shifting Focus

By William Ronco, Ph.D.

Q1.  Which root cause of bad decisions do project teams most typically neglect?

Q2.  What key source of problem solving errors do project teams most frequently ignore?

Q3.  What major factor in drug recalls and new product failures does the pharmaceutical industry most overlook?

A to all questions:  Group dynamics

Ineffective group dynamics significantly undermine science project team performance but largely go unnoticed.  This post, part 1 of 4, describes the shifts in focus needed to address project team failures.  Parts 2, 3 and 4 explore strategies, skills and teambuilding that improve group dynamics.

Group Dynamics:  Unnoticed Culprit

When we make mistakes, most of us take responsibility.  We own up to our computation errors, confess our inadequate analyses, repair our failures of judgment.  We go back, review, make adjustments and corrections, move forward.  Yet even as we work hard to address our mistakes, we often overlook one of the most significant, recurring culprits undermining performance, productivity and achievement:  group dynamics.

When a promising project stalls, it’s often not because team members fail to conduct adequate analysis.  Rather, a vocal minority dominates the team’s discussions, crowding out introverted team members who possess key insights.  When a review board approves a flawed project, it’s often not because board members lack intelligence or objectivity.  Rather, board members’ quick agreement fogged their awareness of the need to look more closely at the data they possessed.

Though we may prefer to work alone, we use groups extensively.  We use groups to engage the multiple, diverse disciplines necessary to grow a concept into a product.  We use groups to tackle projects, solve specific problems, explore opportunities.  We use groups to provide feedback, insight and perspective to individuals.  We use groups to review the work of other groups.  It’s only when individuals work in groups that concepts can move forward, develop and grow.  Yet our understanding of and skills with groups lag far behind our individual efforts.

Learning From The Airline Industry

Some industries already perform admirably in addressing the significant problems group dynamics cause.  For more than a decade, the airline industry has required cockpit crews to participate in CRM (Crew Resource Management) training to reduce pilot and crew errors in problem solving and decision-making.  The industry initiated that training following intense scrutiny of post-crash black-box recordings of cockpit conversations documenting crew members’ failure to effectively express, respond to and intelligently engage crucial factual information.

Reviewing transcripts of those conversations conjures up both grim events and striking parallels to a wide range of other business situations.  Reflecting on the navigator’s too-subtle comment, “I think we’re losing altitude,” we hear echoes of a statistician too quietly reminding the project team that the sample size isn’t big enough.  Reading about the pilot’s ignoring the co-pilot’s observation that the ice on the wings seems extreme, we see the project leader not quite picking up the Quality professional’s contention that the group is interpreting the quality data incorrectly.

Other industries have applied the core principles and practices of Crew Resource Management to address their own project and group communications shortcomings.  The maritime, firefighting and maintenance industries have made significant strides developing similar programs to improve group and project communications and performance.  It’s time all project teams, review boards, committees and groups followed suit.

4 Focus Shifts Clarify Group Problems

Industries that have made progress addressing group and team communications problems began by shifting focus in four ways:

  1. From individuals to group dynamics.  When groups make a mistake, we immediately demand to know “Who’s to blame?”  We focus on individuals – their words, emotions, motivations, intentions.  It’s also useful to focus on group dynamics.  Individuals participate in and influence groups but don’t, and can’t, control them.  Focusing on individuals, we miss the larger problems.  We neglect the recurring, predictable problems of group dynamics that create a dysfunctional environment that brings out every member’s worst thinking.
  2. From processes of conflict to processes of agreement.  When we initially look for group and team problems we focus on conflict, disagreement and discord.  Some groups do fail because they argue too much, and often, over things that don’t matter.  Often, however, groups make the most egregious errors in an atmosphere of camaraderie and harmony.  Psychologist Irving Janis’ studies of groupthink in the U.S. government’s tragically flawed 1960s decision to invade Cuba’s Bay of Pigs show how groups’ relatively quick, efficient agreement masked their neglect of useful intelligence data.  A drive to reach consensus and be a “good team citizen” can make it extremely difficult for group members to surface important information.
  3. From leaders to people who don’t participate.  When we first debrief a group’s actions, we often focus on its leaders’ comments and sentiments: the dominant, prevailing views most people expressed.  We take these to be the views of all the group members.  We don’t realize that the silence of some group members reflects their introversion more than their agreement with the majority opinion.  Often, it’s the people who did not participate who possess the key, valuable information that can lead the group to better solutions and more informed decisions.
  4. From what groups do to how they do it.  Assessing our groups, meetings, committees, review boards, task forces, departments, we usually focus on what they’ve done.  We want to know what they decided, concluded, reported.  Focusing on group process means that we also look into how they reached their decisions, formed their conclusions, wrote their reports.

Focusing on group dynamics asks us to improve our understanding of something that is intangible and often elusive.  It’s understandable and easy to focus on who’s to blame, who dominated and what a group decided.  It will take some effort to better understand the patterns of group communications, the sources of team members’ flawed agreements and the roots of non-participation.  But it’s also quite possible, and completely necessary.

Next Steps

This post, part 1 of 4, describes the shifts in focus needed to address project teams’ failures.  Parts 2, 3 and 4 explore strategies, skills and teambuilding that improve project group dynamics.

This post originally appeared on Genetic Engineering & Biotechnology News’ Bioperspectives.

Director of the Biotech Leadership Institute, William Ronco, Ph.D., provides training and consulting to improve leadership, communications and team performance in science organizations.


Scientist 360 Survey Potentials, Problems, Best Practices

By William Ronco, Ph.D.

O would some power the gift to give us to see ourselves as others see us.  Robert Burns

Are the villagers carrying torches?
A scientist beginning to review her 360 results

360 Survey Uses, Potentials

We like to use 360 surveys because they consistently generate useful results. Scientists we work with in leadership training, teambuilding and strategic planning consistently use their 360 results to make meaningful gains improving their leadership and communications performance. Their survey results help them see more clearly how their communications with others advances – or obstructs – their science.

360 surveys provide the person being surveyed with data about their communications and leadership effectiveness as respondents perceive them. The surveys use the “360” label because they usually elicit data from a mix of people around the person being surveyed: people who manage, collaborate with, are clients of and/or are managed by that person.

Once thought radical, 360 surveys are now in widespread use in most industries. Numerous vendors offer 360 survey packages online. Software like SurveyMonkey makes it possible for amateurs to design their own surveys with minimal investments of time and effort. Some companies conduct 360 surveys on key managers and employees and use the results when making promotion and advancement decisions. Others enable survey recipients to control the process themselves, selecting respondents more for learning and development purposes.

Scientists in particular benefit from the 360 survey process. If done well, the surveys translate the important but difficult-to-describe competencies of “communications” and “working relationships” into data. Scientists can take the time to analyze the often-rich data in their survey results, develop and implement thoughtful action plans. Some scientists we’ve worked with have taken their 360 results data far beyond the percentage and mean scores the survey software produces, creating scatter diagrams and exploring standard deviations.
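That extra analysis pays off. As a quick sketch shows (the response sets below are invented for illustration), two questions can earn identical mean scores from respondents while telling very different stories:

```python
from statistics import mean, stdev

# Two invented sets of 5-point-scale responses to two different questions.
# Both earn the same mean score, but the spread differs completely.
consistent = [4, 4, 4, 4, 4, 4, 4, 4]   # everyone broadly agrees
polarized = [5, 5, 5, 5, 3, 3, 3, 3]    # respondents split into two camps

print(mean(consistent), mean(polarized))  # identical means: 4 and 4
print(stdev(consistent))                  # 0.0 -- genuine consensus
print(round(stdev(polarized), 2))         # 1.07 -- disagreement the mean hides
```

A respondent set that splits into camps usually deserves a closer look than one that genuinely agrees, even when a packaged report scores the two identically.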

360 Survey Shortcomings, Dangers

Of course, 360 results do not reflect respondents’ opinions with perfect accuracy. Some respondents are wary of the surveys’ claim of confidentiality and so restrain their responses. Some use the surveys as a weapon, communicating long-held grudges. Others take the surveys as an opportunity to advance their own position, e.g., in open-ended comments like “Harry is the best boss I ever had, signed George.”

It’s also quite possible for 360 surveys to cause significant damage. The whole premise of receiving “objective” survey feedback on one’s communications effectiveness glosses over the gravity of the personal issues 360 surveys may address. Listening skills, showing respect, working well with others: the behaviors surveys describe can be difficult to think about and discuss, entangled in respondents’ ego and identity.

A few companies have put 360 surveys to ill use, reassuring respondents and recipients that the process is confidential but then using the data to influence advancement and layoff decisions. More often, the harm people cause with 360 surveys is unintentional, the result of careless planning, sloppy survey design or processes that fail to ensure respondent confidentiality.

Getting The Most From Your 360 Survey: Ten Tips For Planning And Launching

It takes some clear thinking and planning to avoid the problems and shortcomings, and get the significant results 360 surveys offer:

1. Clarify outcomes. It’s most useful to begin work with 360 surveys not by reviewing the packaged survey offerings online or drafting survey questions but by listing what kinds of information you want the survey to provide and clarifying how you want to use the survey. If the survey is being initiated not by you but by your organization and your influence on survey design is limited, it’s still very useful to clarify what you’d like the survey to do for you, what aspects of your communications you’d like it to spotlight and how you anticipate using the results.

2. Clarify survey ownership. Some companies retain 360 survey data in Human Resource files, others enable recipients themselves to receive the only copy of the data. Both approaches can be effective for different purposes as long as they are made clear and then scrupulously maintained. Changing the uses and circulation of survey data mid-stream is ineffective and unethical.

3. Clarify respondent confidentiality. Some 360 surveys don’t try to achieve respondent confidentiality, using the process more as an open reporting device. More often, 360 surveys strive to ensure respondent confidentiality to maximize the value and validity of responses. Respondent confidence in your assurance of confidentiality will enhance the quality of their responses. Take the time to discuss your plans for ensuring confidentiality with several of your actual respondents to ensure that you’ve settled on an effective strategy.

4. Consider the number of respondents. It usually takes a minimum of 8 respondents to both provide reasonable respondent confidentiality and generate enough responses to detect patterns and trends in your communications. It’s fine to increase the number of respondents to 15 or 20, but often difficult to analyze results beyond that.

5. Maximize response rate. A survey of 10 respondents that has a 100% response rate is much more valuable – and valid – than one of 100 respondents that has a 10% response rate, because low response rates invite selection bias: the people who do respond may not represent the whole group. Work to increase your response rate by selecting respondents thoughtfully and encouraging them to be open and frank in their responses. Keeping your survey as short as possible maximizes response rates. It’s tempting and easy to add questions that are “interesting.” However, each question you add has the potential to reduce your response rate.

6. Link your desired outcomes with the survey questions. Pre-packaged surveys generate tidy, graphically impressive results and enable you to compare your responses with those of people in other organizations. However, many of the statisticians we’ve worked with have scoffed at the packaged survey statistics as misleading. They’ve challenged the practice some of these packages follow of reporting mean scores while neglecting to show how the data points are distributed. Also, science jobs seldom lend themselves to comparison across organizations because they usually evolve in ways shaped by specific, project-related tasks.

7. If you’re designing your own survey, take the time to do the online tutorials. Customizing your survey usually makes it more possible to link survey questions with your desired outcomes, but it does take a bit of effort. The neatness and agility of Survey Monkey and other online tools can mask the ambiguity of the questions you formulate. To ensure that your questions are addressing the items that interest you, pre-test them on a few people and discuss their thoughts.

8. Tune your questions. Organize your questions into 5–8 groupings of 5–6 questions per group. Use a 5-point Strongly Agree – Strongly Disagree scale for responses on closed-ended questions. The difference between an “Agree” and a “Strongly Agree” response is essential both for respondents to express their sentiments and for scientists who want to perform at the highest levels. Use mostly closed-ended 5-point scale questions, but also include several open-ended questions to capture whatever the closed-ended questions miss. Remind respondents that the survey software uses their open-ended comments as written, so they should be careful not to use catch phrases that may identify them.

9. Consider using 2-part questions. Two-part questions ask respondents not only to scale responses on 5 points from Strongly Agree to Strongly Disagree but also to rank the importance of the question on a 5-point scale from Very Important to Very Unimportant. This question form often points out to the person being surveyed that he/she is performing tasks effectively that respondents don’t think are important while neglecting tasks that respondents rate as high priorities.

10. Ask respondents to email you when they’ve completed the survey. That way you can protect their confidentiality while also keeping tabs on your response rate. If your response rate lags, ask all respondents again to complete the survey. Allow about a week for respondents to complete their efforts.
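The two-part questions suggested in tip 9 lend themselves to a simple gap analysis. In this sketch the question labels and averaged scores are invented; the gap between importance and performance flags the tasks respondents rate as high priorities but see as neglected:

```python
# Invented averaged 2-part responses: (performance, importance), both on 1-5 scales.
responses = {
    "Shares progress updates": (2.1, 4.8),   # neglected high priority
    "Formats weekly reports":  (4.7, 2.0),   # polished but unimportant
    "Reviews study designs":   (4.2, 4.5),   # roughly in balance
}

# Gap = importance - performance; sort so the biggest shortfalls come first.
gaps = sorted(
    ((importance - performance, task)
     for task, (performance, importance) in responses.items()),
    reverse=True,
)
for gap, task in gaps:
    print(f"{task}: gap {gap:+.1f}")
```

The largest positive gaps are natural starting points for an action plan; large negative gaps suggest effort that could be reallocated.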

“I Know Who Wrote This”

Initially reviewing their 360 survey results, many people ignore the quantitative data,  turn to the open-ended comments, quickly find the one or two less-than-wonderful phrases and even more quickly conclude, “I know who wrote this.”  Many others comb quickly through the data, focusing, then obsessing about the several less-than-perfect response sets.

Both responses are understandable but not terribly productive strategies for getting the most from 360 survey results.  Finding patterns and trends in responses generates more useful insights than attempting to identify the source of a rogue response.  And the statisticians among our readers know very well what to do with the outliers in any data set.
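For readers who are not statisticians, a quick sketch (the scores are invented) shows why a single rogue response deserves little attention:

```python
from statistics import mean, median

# Invented responses to one question: nine respondents cluster at 4-5,
# while a single rogue respondent answers 1.
scores = [5, 4, 5, 4, 4, 5, 4, 5, 4, 1]

print(round(mean(scores), 1))  # 4.1 -- the outlier drags the mean down
print(median(scores))          # 4.0 -- the median barely notices it
```

Robust summaries like the median, or simply setting the outlier aside, keep one rogue response from coloring the whole result.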

Scientists react with less than scientific responses to their 360 survey data because the data addresses personal issues.   Despite our professional preferences, traces of emotion stubbornly remain.  However much we claim to – and really do – want objective feedback about our communications effectiveness, reviewing data describing our actual performance tests the limits of our Spockian ideals.

As a scientist you will also probably quibble with the validity of 360 data, and your quibbles are valid.  360 survey data is subjective, your sample size is small, and respondents completed the survey under less than perfect lab conditions.  Still, the survey gives you a fairly accurate, comprehensive and calibrated sense of others’ perceptions of your communications effectiveness.

Look Through The Johari Window

The Johari Window provides useful perspective for interpreting 360 results.  Named not, as it sounds, after a mystical Eastern philosophy but for its inventors, Joseph Luft and Harrington Ingham, the contingency table divides ways to understand one’s 360 responses:

                        Known to others     Not known to others
  Known to self         Public              Hidden
  Not known to self     Blind               Unknown

Optimally, all of a survey recipient’s reactions to his or her 360 data would fit in the “Public” block; 360 data recipients would be able to perfectly predict all their responses.

Working with thousands of scientists and their 360 data, we have not yet encountered anyone who perfectly predicted – or has been completely surprised by – his or her 360 responses.  A few people’s predictions closely approximate their actual results, some broadly miss, but most are generally accurate.  Nearly all miss a few things.  They say the survey results are helpful in pointing out specific items they need to work on.

Some of the things people need to work on reside in the “Blind” block.  These are aspects of their communications that elicit reactions they were not aware of.  Scientists’ “Blind” block often includes others’ reactions to their criticism.  A scientist may not realize that criticism of others’ work he or she thought was “crisp” and “rigorous but fair” was perceived by its recipients as “devastating,” “withering,” “over the top,” or “dehumanizing.”

Other, different things people need to work on fit more accurately in the “Hidden” block.  These are aspects of their communications that they thought they had made clear, but that others apparently have not seen.  This occurs when scientists thought they were providing more than enough information about progress on key tasks, only to find that their 360 survey respondents want and need much more.

The distinction between “Blind” and “Hidden” responses is important because each requires a different action response.  Responding to “Blind” issues asks 360 survey recipients to generally withdraw, to pull back on behaviors that irritate, annoy or anger others.  Responding to “Hidden” issues, on the other hand, requires 360 survey recipients to generally step up, push forward, communicate more about matters that interest, involve or impact others.

6 Tips Help You Get The Most From Your 360 Results

1.  Focus first on the quantitative responses, then look at the open-ended comments.  The quantitative data is less colorful but much more useful to enable you to identify patterns and themes in your communications effectiveness.  Once you’ve got a grasp on your overall response profile, move on to the open-ended comments to provide more “color” to help you understand the quantitative shades of grey.  Try not to dwell on any open-ended comment, no matter how interesting or upsetting it may be.  Remember, the people writing these comments often don’t intend them to reflect their best thinking.

2.  Begin with the “Overall” question.  Most people start reviewing their data by looking at the responses to the first question on the survey, then moving on to each question in sequence.  It’s more useful to skip to the “overall,” summary questions that usually conclude the results because they provide an overall context for all the other questions.  It’s also useful to prioritize the questions, focusing most on those that address issues most important to your effectiveness overall.

3.  Aim for a majority of “5’s” on a 5-point scale on the key questions.  If you’re wondering what counts as “Excellent” 360 responses, aim for a majority of “5’s” on a 5-point scale.  4’s are fine, and 3’s may also be all right.  However, it’s worth aiming for 5’s, especially on the questions that address issues most important to your job success, for the same reasons you aim for excellence in your science work.  Yes, some respondents just don’t give 5’s for anything and yes, you can’t please everyone.  But the 360 is about communicating effectively, not pleasing people.

4.  Convert your insights to actions.  You’re most likely to make improvements and convert your 360 survey insights into actions if you:

  • Keep the data in front of you. Some people tape a printout to their monitor, or keep a view of it open in their browser
  • Schedule times when you specifically address the survey insights

5.  Discuss your survey results with respondents.  Of course you shouldn’t ever ask respondents how they themselves responded to your survey.  However, discussing your reactions, insights and action plans with respondents usually elicits positive responses from them; it shows them you’re taking their responses seriously.  Also – as long as you don’t attempt to defend or explain yourself – discussing your results with respondents usually generates additional insight that’s very useful for you.

6.  Plan to re-survey in 6 – 12 months.  Circumstances change, and you’ve been working on addressing the issues that came up in that survey.  Getting into the habit of doing a 360 annually enables you to benchmark and chart your progress in leadership and communications effectiveness the same ways you do for the technical aspects of your science.

This post was originally published on Genetic Engineering & Biotechnology News as a 2-part post.

Director of the Biotech Leadership Institute, William Ronco, Ph.D., designs and uses 360 surveys in his work with hundreds of scientists every year.  Dr. Ronco consults on leadership, communications, team, and partnering performance in pharmaceutical, biotech, and science organizations.


Redesign Your Science Job: Increase Satisfaction, Meaning, Performance

By William Ronco, Ph.D.

“New responsibility creep” can drain the satisfaction science jobs provide. We describe strategies and tools to redesign your job to reflect your interests and goals.


Science Jobs As Houses With Additions?

The house may look good, but with those additions, what’s it like inside? Is getting from bedroom to kitchen a tortured journey through twisting hallways, up random stairs, around imposing walls? Does the house’s design enhance its occupants’ quality of life?

Science jobs resemble houses with additions. They begin with simple structures. Then, as houses respond to new needs with additions, science jobs respond to “new responsibility creep” with new tasks. Like the evolved house, the evolved job may look good on the outside but be problematic inside. Its new tasks may dilute the scientist’s core interests, diverge from the scientist’s purpose and goals.

The ebb and flow of projects in science organizations makes it feasible for most scientists to redesign their job to better meet their needs. The payoff for redesign can be significant, like the difference between microwaving a packaged dinner and making a recipe from scratch. Cooking the meal takes a bit more effort but yields a more personalized, satisfying dining experience.

Three Job Redesign Tasks

Three redesign tasks keep science jobs as satisfying as they have the potential to be:
1. Clarifying the Foundations of the job – passions, purpose, aptitudes and pay – provides a basis for taking action to make job changes.
2. Using Tools – formal roles, informal roles, studies and mentors – builds the Foundations into the job.
3. Completing a 1-page Goal-Based Work Plan builds on the foundations and uses the tools to create a comprehensive, prioritized roadmap for action.


Answering these Foundations questions provides the core for redesigning jobs:

Which Passions provide the most significant sources of motivation and energy? For many scientists, solving complex problems, disproving erroneous theories or doing complicated calculations spark very high levels of job satisfaction.

As “new responsibility creep” grows, it’s easy to lose sight of one’s passions. A 30-minute exercise restores much clarity. In the first 15 minutes, one lists, without analysis, as many job tasks as one can that generate any spark of motivation. In the second 15 minutes, one rigorously matches each item on the list against all the others, progressing to a small number (3–5) of the most important passions. (Readers interested in more details for this exercise will find them at the website.)
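The second 15 minutes of the exercise amounts to a round-robin tournament among the listed tasks. A minimal sketch, with invented task names and a fixed ranking standing in for the personal judgments one would actually make comparison by comparison:

```python
from itertools import combinations

# Invented list from the first 15 minutes of the exercise.
tasks = ["modeling data", "mentoring juniors",
         "writing reports", "debugging pipelines"]

# Stand-in for one's own judgment: in the real exercise each comparison
# is answered by the person, not by a lookup table.
rank = {"modeling data": 1, "debugging pipelines": 2,
        "mentoring juniors": 3, "writing reports": 4}

# Match every task against every other one; count the wins.
wins = {t: 0 for t in tasks}
for a, b in combinations(tasks, 2):
    winner = a if rank[a] < rank[b] else b
    wins[winner] += 1

# The tasks with the most wins are the 3-5 passions to build the job around.
top = sorted(tasks, key=wins.get, reverse=True)
print(top[:3])
```

The round-robin forces every task to be weighed against every other one, which is exactly what the exercise asks for and what an unstructured list never delivers.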


Like passions, Purpose also motivates many scientists and can also fall prey to “new responsibility creep.” Purpose can be overall life purpose, e.g., “I am personally committed to curing Disease X within the next 10 years.” More often, purpose encompasses several smaller scale issues, e.g. “I believe clinical pharmacologists should be more involved in research design,” or “I think Discovery and Phase 1 research should be less encumbered by expectations for quick pay-offs.”

Whatever the scale of one’s purpose may be, it’s essential to make sure one’s job honors, and possibly actively expresses, it. (Readers interested in more details about clarifying purpose may find it useful to write a “This I Believe” piece.)


Though some career counselors advocate building jobs primarily on one’s aptitudes, i.e., natural abilities, scientists know that this approach can create problems as well as solutions. It’s easy to confuse ability with enjoyment – just because one finds it easy to write poetry doesn’t necessarily mean that one enjoys either the process of writing or the completed poem. Also, following one’s aptitudes can lead scientists to be pigeonholed doing tasks that generate little passion or purpose.


As long as scientists are not independently wealthy, it’s also important to include aspects of pay – salary, benefits, upper limits, career paths, etc. — as an essential aspect of job redesign. Thankfully, the internet makes data on pay available to all in a way that makes it possible to deal realistically with the financial aspects of many science jobs. Regrettably, some scientists fail to do the due diligence with this data, and experience surprise and disappointment when they discover that the job of their dreams imposes financial limits on the life they want to lead.

Project Flow and Tools

The ebb and flow of projects naturally beginning, developing and concluding in science organizations makes it easy to redesign jobs, continuously bringing new opportunities for scientists to alter formal and informal roles, engage in studies and work with mentors:
• Formal roles, tasks and responsibilities that become part of a scientist’s job provide one excellent job redesign tool. Taking on a formal new job responsibility gives a scientist license to connect more effectively with one’s passions, purpose, pay and aptitudes.
• Informal roles, tasks and responsibilities provide an additional, often overlooked job redesign tool. A scientist who wants to re-connect with her passion to perfect Bayesian analysis can often arrange, beyond her formal job responsibilities, to work more informally in advisory roles with specific projects.
• Independent studies, whether done as part of one’s job or on a scientist’s “own time,” enable scientists to work in areas that tap into their passions and purpose and benefit their organizations as well. Independent studies can enable scientists to expand their existing passions and principles as well as explore new areas that intrigue them, e.g., how other departments in their own organization work.
• Mentors, people with extensive expertise in specific subjects, abound in science organizations. Most are open to having discussions with other scientists who want to learn from them and explore new applications of their knowledge.

Using the Goal-Based Work Plan

The Goal-Based Work Plan builds on the foundations (passions, purpose, aptitudes and pay) and uses the tools (formal and informal roles, independent studies and mentors) introduced in the previous blog post to produce a brief (1-page) yet comprehensive document. Completing the document increases scientists’ clarity about their passions, purposes, interests and goals. The completed document provides a useful tool for scientists to plan and negotiate job changes with management.

The Goal-Based Work Plan creates the structure useful for action planning. It includes:
• A Tasks column that provides the location for much job redesign. Listing the 8–10 major tasks the job includes, it’s where scientists can add formal and informal roles to address their passion, purpose, aptitudes and pay. Each task begins with a verb, e.g., in science: Research, Study, Analyze, Write, Measure, Assess, Decide. Half the tasks are quantitative and analytical; the other half involve communications, e.g., Communicate with team members, Partner with users, Manage project team members, Mentor new hires, etc.
• An Outcomes column that further advances job redesign, asking the scientist to articulate, as specifically as possible, what the expectations for achievement are for each task. In the column, it’s useful to articulate several quantifiable outcomes that doing the task is expected to produce in a 3-month time period. The ASMART acronym helps generate outcomes that are clear: Agreed-on, Specific, Measurable, Achievable, Results-Oriented, Time-Bound.
• A Priorities column that often generates considerable thinking. It asks the scientist to divide his/her overall 100% effort into the task categories. The numbers should reflect what’s most important, not how long tasks take.
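Since the Priorities column must allocate exactly 100% of one’s effort across the 8–10 tasks, a tiny check (the task names and numbers here are invented) keeps a draft plan honest:

```python
# Invented draft Goal-Based Work Plan: task -> priority (% of total effort).
plan = {
    "Estimate sample sizes":      20,
    "Analyze research designs":   25,
    "Write statistical reports":  15,
    "Communicate with teams":     20,
    "Mentor new statisticians":   10,
    "Co-facilitate stats group":  10,
}

# The Priorities column must account for exactly 100% of one's effort.
total = sum(plan.values())
assert total == 100, f"Priorities sum to {total}%, not 100%"

# Priorities reflect importance, not time spent; flag anything implausibly small.
minor = [task for task, pct in plan.items() if pct < 5]
print("Priorities balance." if not minor else f"Reconsider: {minor}")
```

Forcing the numbers to sum to 100 is what makes the column useful: raising one task’s priority means explicitly lowering another’s.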

Completing this form, most people list the tasks quickly and easily, but have to think more about the outcomes (and especially the outcomes for the communications tasks) and struggle most to clarify priorities allocating the 100% among all the tasks.

To illustrate how the form advances job redesign, we present two versions of it for James, a statistician working in a mid-sized biotech company:
• Figure 1. First draft
• Figure 2. Second draft showing both increased detail overall and reflecting the scientist’s passions, purpose, aptitudes and goals

Following is James’ first draft of his job description:


James’ Reflections and Job Redesign

James liked the clarity his first draft Goal-Based Work Plan provided in describing what he thought his job really was. Reflecting on his passions, purpose, aptitudes and pay, the draft helped him understand his frustration with his job. He revised the first draft:
• Noting that the outline failed to address his passions for learning new statistical methods and the discovery aspects of science, he added a detail to his Estimate Sample Size task to include piloting new methods. Addressing his passion for scientific discovery, James refined his role in Analyze Research Designs to sanction greater involvement in the discovery process. He also thought he’d reach out to ask Emma, a senior statistician, to mentor him. She seemed to have done an excellent job keeping herself up to the minute with evolving statistical methods.
• Seeing that the outline neglected his purpose to increase the visibility and role of statistics in the research process, James reformulated his Analyze Research Designs tasks to be more proactive. Instead of waiting to be called on for input, he began reaching out to the scientists working in early phases of research so that he could be included in the earliest possible discovery discussions. Along similar lines, he enlarged his role in the Statistics group, working on the company-wide statistics training his managers had asked for some time ago.
• Tapping into emerging aptitudes for leading groups that he had noticed in his activities on town civic committees, James thought he’d try taking a more active role in the statistics group, perhaps co-facilitating it instead of just participating as a member. At the same time, he thought it would be useful to conduct a brief study of the way similar groups function in other companies. He realized he could find considerable useful information efficiently through his contacts in the statisticians’ professional association he belonged to.
• James reflected on his task to Work With New Statisticians in the recent merger of his company with a smaller start-up. He realized that taking a more active role partnering with that group would enable him to address both his passion to enlarge the impact and role of statistics in the company and his aptitude for group work.
• As James reviewed the Priority column, he reduced his effort for estimating sample sizes and writing reports, tasks that interested him less and should be easier to accomplish given his experience. He increased the priorities for the analysis, communications and partnering work implied by his new roles in Estimating Sample Sizes, Analyzing Research Designs and co-facilitating the statisticians’ group.
• Finally, when James checked the pay data he found on the internet, he noted that making the job redesign changes he was considering would enlarge the statistics job in ways that justified pay increases beyond the levels he had reached.


Pleased and energized by his revisions, James discussed his revised draft with his lab’s project, statistics and human resources managers. Each had questions about, but overall positive reactions to, the plan. Over the course of a few weeks, James was able to implement most of the plan and see the results in his everyday work.

This post originally appeared on Genetic Engineering & Biotechnology News as a 2-part series.

William Ronco, Ph.D. is interested in hearing readers’ thoughts about and experience with this post.  Dr. Ronco works with scientists redesigning their jobs in his scientist leadership training, teambuilding and strategic planning.

Posted in Uncategorized | Leave a comment

Improving Science Partnering: Succeeding After The Deal

By William Ronco, Ph.D.

Effective partnering is essential for success in science mergers, acquisitions and alliances, and between departments, but partnering results often fall short. Managing partnering as if it were a project, and running better partnering meetings, significantly improves partnering results.

“We just got acquired by a giant pharma company. They say they’re going to leave us alone. They just want to partner with us to learn about how we model our cells.”
“We’re partnering also, with a company like ours that focuses on Phase 2 arthritis drug development. I can’t imagine how we’ll keep track of who owns what.”
“And I can’t imagine how cross-company partnering could work. We can’t partner in our own company with the other departments that are supposed to work with us.”

In 2013, working in biotech and pharma means working with partnering. It’s not possible to read an industry publication or view a biotech association web site without encountering news about the latest deals, the most recent merger and alliance agreements.

We partner across companies because few if any large companies attempt to do Discovery or Phase 1 work. We partner in alliances because we hope / expect that pooling our efforts will benefit us all. We partner between departments in our own organizations because our organizations are flat, decentralized. They lack the kind of formal hierarchy that works well for banks and insurance companies but would stifle the free-flowing communications and thinking necessary for drug development.

Partnering Problems

Our focus on partnering deals masks the problems we encounter when we attempt to translate the deals into everyday work. Two thirds of general business mergers and acquisitions fail, the large literature on the subject tells us. We lack similar accessible data on our own industry, and on alliance and inter-departmental partnering performance because of our stringent intellectual property practices. However, our everyday experience strongly suggests that our alliances and inter-departmental partnering encounter similar difficulties.

We encounter a host of puzzling, troubling, partnering problems. The same large company that initially told us they would “leave us alone” now wants to make “minor” changes to our procedures that we know will have major negative impacts. No one in the large organization can tell us who can answer our questions. People we haven’t met before appear at our labs or engage us in conference calls where they demand to know why they haven’t received information we didn’t know they needed.

Most troubling, and often most pointed, are the conflicts we encounter among departments attempting to partner in our own organization. We appreciate that quality standards must be high, but must our QA auditors come after us with such a vengeance? Do they really need everything they’re asking for? Do they really expect us to drop everything we’re doing just to meet their artificial deadlines and emergencies?

Perfect Your Partnering Meetings

Though daunting, these problems are understandable. People can’t tell us who’s responsible in our partner organizations because they don’t know. In fact, we probably can’t answer such questions about our own organization. In science, all of our organizations need that kind of ambiguity in order to keep the fragile process of inquiry alive.

Effective partnering meetings provide a forum to manage the partnering process and increase partnering success. Meetings should be scheduled closely enough together to address issues and track action items, but far enough apart that participants can implement plans. Meetings must involve the key players, usually a vertical cross-section of each organization – senior managers to give general direction, mid-levels to do the work, and grassroots people to provide a reality check and hands-on insights.

Because partnering meetings build an infrastructure engaging two groups, they need an agenda that works intelligently on four tasks:
1. Taking Stock. The whole group devises and tracks measures that accurately, objectively assess the strengths, weaknesses and opportunities of partnering performance.
2. Building Trust. The personality profile tools our HR departments use for leadership training — Myers Briggs, DISC, Personalysis, FIRO-B, etc. — all provide valuable insight to help people build the trust needed to bring organizational agreements to life.
3. Clarifying Goals. Disagreements about partnering practices often trace back to differing interpretations of partnering goals. Every partnering group needs to discuss, write, and sign off on a set of 6-10 performance and communications goals they agree to aim for in order to make the partnering successful.
4. Implementing Processes. Partnering groups ask what 3-4 key processes must work smoothly in order to achieve their partnering goals. They usually plan and implement processes for responding to changes, resolving conflicts and ensuring that all members of the group give and receive necessary information when it’s needed and in the form required.

One word provides the criterion for assessing partnering meeting effectiveness: specificity. It’s nice if meeting discussions are interesting and participants are polite. It’s most likely for meetings to have lasting results if discussions are specific and result in detailed lists of who will do what when.
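The “who will do what when” criterion is easy to operationalize. As a sketch (the participants, tasks and dates below are invented for illustration), a meeting’s action items can be kept as simple records and filtered for overdue follow-up before the next session:

```python
from datetime import date

# Each action item records who will do what by when (sample data is invented).
action_items = [
    {"who": "QA lead", "what": "circulate audit checklist", "due": date(2013, 6, 14)},
    {"who": "Stats group", "what": "draft data-exchange format", "due": date(2013, 6, 28)},
]

def overdue(items, today):
    """Return items whose due date has passed, for review at the next meeting."""
    return [item for item in items if item["due"] < today]

late = overdue(action_items, date(2013, 6, 20))
# On June 20, only the June 14 item ("circulate audit checklist") is past due.
```

The value isn’t in the code itself but in the discipline it enforces: every discussion ends as a named owner, a concrete deliverable and a date.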

Manage Partnering As If It Were A Project

Initially encountering these ideas, many people express surprise that such extensive effort is necessary to ensure partnering success. When they compare that effort to what they’ve given their own troubled partnering, they realize they’ve severely neglected it, expecting it to somehow unfold on its own.

It helps to think of partnering as if it were a project. Mergers, acquisitions and alliances may confuse us, but we know projects. In science, we live, breathe and succeed with projects every day. Managing partnering like a project means doing the same things with partnering that we do to make projects successful: assigning leaders, clarifying individuals’ roles and responsibilities, setting clear deliverables and schedules, and establishing effective processes for exchanging information and solving problems.

Our partnering activities bring so many promising ideas and people together. It’s time we learned to make the most of them.

This post originally appeared in Genetic Engineering & Biotechnology News.

William Ronco, Ph.D. consults extensively on improving partnering success. His web site and book, The Partnering Solution (Career Press, 2005), provide detailed descriptions of partnering methods and a wide range of real case examples, including several from pharma and biotech. Dr. Ronco is interested in hearing about readers’ efforts, successful or not, to improve their own partnering situations.

Posted in Uncategorized | Leave a comment

Increasing Science Strategic Planning Success

By William Ronco, Ph.D.

Effective strategic planning can address the most important issues science organizations face.  Not only senior executives, but project and group leaders also benefit significantly from using strategic planning tools.

“Our executives have started their annual strategic planning sessions.  This involves sitting in a room with inadequate data until an illusion of knowledge is attained.  Then we’ll reorganize because that’s all we know how to do.”  – Dilbert’s boss explains strategic planning

Why should science organizations do strategic planning?  Conventional businesses often struggle with it, and their products and services are much easier to understand and predict than the compounds in development in a lab.  Science organizations understandably overlook strategic planning because their focus on their science absorbs the lion’s share of their interest.  “Of course we have a strategic plan,” a biotech start-up CEO told me.  “It’s – get the experiment to succeed.”

Yet it’s important for science organizations to do strategic planning because it addresses the most crucial organizational issues that enable – or obstruct – science.  Effective strategic planning involves three kinds of work scientists do well – analysis, innovation and thoughtful action.  Analysis assesses the key data that describes the organization’s strengths, weaknesses, opportunities and threats, often with the shorthand SWOT.  Innovation comes into play in clarifying the organization’s Vision, Mission, Values and Goals.  Thoughtful action closes the gaps between the SWOT and Vision with several initiatives that address problems and /or explore opportunities.

Strategic planning can help science not only in organization-wide issues but also in a wide range of applications.  For project managers, team leaders and department supervisors, strategic planning provides a handy tool to increase individual focus on group goals and improve the complex communications essential for translating individual efforts into organizational results.

Analysis, Vision, Initiatives:  Strategic Planning Specifics

Apple executive Alan Kay notes, “The best way to predict the future is to invent it.”

At its best, strategic planning links the three tasks to create a process of continuous improvement, evolution and organizational learning.

Analysis answers the question, “How are we doing?”  Done ineffectively, it involves limited or irrelevant data or, worse, no data at all.  The concept of the “balanced scorecard” helps leaders remember that effective Analysis includes data on a balanced scope of organizational issues.  In science organizations, this usually includes not only progress on the development of concepts but also employee turnover, cash flow and communications in and across all levels of the organization.

Vision addresses the question, “What do we want?” with several kinds of responses:  a Vision describing a bold accomplishment the organization would like to achieve in 3 – 5 years, a Mission describing what the organization really does, Values clarifying the principles the organization intends to follow and Goals specifying quantifiable business measures it aims to accomplish in a 6 – 12 month time period.  Reaching consensus on these questions may take some time, but it pays off significantly by increasing alignment between employees’ individual efforts and the organization’s focus.

Initiatives map actions the organization plans to take within a 3 – 12 month time period to close the gaps between Analysis and Vision.  Typical science project team initiatives include tuning up communications processes and hand-offs among project team members, clarifying each member’s project roles and responsibilities, and drawing a project organization chart that clarifies team members’ working relationships.

Five Principles Increase Science Strategic Planning Success

Five principles increase science organizations’ success with strategic planning:

1.  Do strategic planning both organization-wide and for projects, teams and departments.  Senior executives’ plan for the whole organization provides valuable focus and direction for all employees, explores conceptual possibilities that might be neglected and draws attention to organizational housekeeping that supports better science.  Project, team and department leaders’ planning with their constituencies brings focus to the organizational issues that impact the groups.

2.  Directly involve people who will carry out the plan and key stakeholders as well. Including a wider range of people in planning makes it more possible for them to “own” action goals and initiatives the planning group formulates.  Involving them increases their engagement with the group and taps into the grassroots insights they bring.

3.  Do all three parts of strategic planning – Analysis, Vision and Initiatives.  It’s the connection and interplay between analysis, vision and initiatives that makes strategic planning most effective.  When strategic planning is ineffective, it’s often because the organization has worked with only one or two of these tasks.

4.  Give specific people clear deliverables for carrying out the Initiatives.  Effective strategic planning doesn’t end with plans for the organization, it clearly identifies who is responsible for carrying out the initiatives, and describes the initiatives’ deliverables and schedules.

5.  Follow Up.  Getting the most from strategic planning involves making it a living document, following up on action items quickly and updating the plan regularly.

This post originally appeared on Genetic Engineering & Biotechnology News.


I’m interested in your experience with and ideas about this.  Contact me through our web site, which provides links to White Papers that may be useful for you and describes our consulting and training.

Posted in Uncategorized | Leave a comment

Science Performance Appraisal Problems, Best Practices, Tips

By William Ronco, Ph.D.  Reprinted from our article in Genetic Engineering & Biotechnology News.

“Mine was a waste of time.  My boss did all the talking and there wasn’t even much of that.  It took him twenty minutes to cover a year of my work.”

“Mine was a joke.  The forms my company uses don’t have anything to do with the science I’m actually working on.”

“I don’t know about forms.  What I do know is that what they pay me doesn’t connect with what they tell me.”

“My company’s forms aren’t so bad.  What bothers me is we never get to the part about my professional development.” 

Listen to a group of scientists in December and January and you’re likely to hear these and many other complaints about performance appraisal dysfunction.  Many science organizations do performance appraisals at year-end.  Few do them well.

It’s understandable that science organizations struggle with performance appraisals.  How is it possible to assess a scientist’s performance when positive experimental results may reflect sloppy lab work more than creativity or clear thinking?  And how is it possible for scientists to overlook the inherent absurdity of the bureaucratic performance appraisal process?  The very thought of using cumbersome, corporate, jargon-laden forms to assess a scientist’s performance in a 60-minute discussion can’t help but invite healthy skepticism, if not outright rebellion.

Despite the struggles, it’s especially important to get science performance appraisals right.  Done well, appraisals have great potential to motivate, focus and help scientists pursue professional development in the areas that interest them most.  For science organizations, effective performance appraisals help retain and develop key staff, build a positive organization culture and advance science for all.

From Performance Appraisals To Performance Management:  Quarterly?

The most important best practice we’ve seen is a quarterly, rather than the traditional annual, approach.  For people who struggle with cumbersome annual appraisals, the thought of repeating the process four times may initially seem absurd.  However, a quarterly approach makes the discussions more current and focused on real job activities.  Quarterly discussions require much less paperwork and thus often take less time overall than the annual process.  Most important, quarterly discussions trigger a shift from performance appraisal to performance management: both the manager and the employee can take a more active, positive role in jointly managing the employee’s performance and development.

Beyond a quarterly approach, other key best practices include:

  • Discussions are scheduled throughout the year on the employee’s anniversary of hiring, not batched in a brief (typically year-end) time period.
  • Company forms accurately describe employee jobs and performance criteria.
  • Discussion devotes equal time to 3 topics – performance assessment, updating job priorities and planning meaningful training / professional development
  • Performance assessment is objective, fair and balanced
  • Job priorities and expectations are clear and updated for the next 3 months
  • Pay reflects employee performance in some way and aligns with organization goals
  • Senior leaders themselves give high quality discussions
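The anniversary-based, quarterly scheduling these practices call for is easy to automate. The sketch below is a minimal illustration — the hire date is invented, and spacing quarters as 13 weeks is a simplifying assumption, not a rule from the practices above:

```python
from datetime import date, timedelta

def quarterly_checkins(hire_date, year):
    """Return four check-in dates for the given year, starting on the
    hire-date anniversary and spaced roughly one quarter (13 weeks) apart."""
    anniversary = hire_date.replace(year=year)
    return [anniversary + timedelta(weeks=13 * i) for i in range(4)]

# Hypothetical employee hired March 15, 2008:
dates = quarterly_checkins(date(2008, 3, 15), 2013)
# The first discussion falls on the 2013 anniversary, March 15.
```

Generating the dates up front, rather than batching reviews at year-end, is exactly what keeps the discussions current and spreads the manager’s workload across the year.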

Seven Practical Tips For Both Managers and Employees

Whatever your organization does, scientists giving or getting performance appraisals can take several kinds of action to get the most from the process:

1.  Both manager and employee draft the forms.  Having both people work with the forms makes discussions more productive.

2.  Exchange drafts of the forms before meeting.  Knowing more about what the other person is thinking before the meeting helps make discussion less reactive, more thoughtful.

3.  Relate the forms to the job.  If the organization’s paperwork doesn’t make the connections between what the person actually does and what they’re being assessed on, work to fill in the gaps and make the connections in the discussion.

4.  Work to make assessments fair.  People struggle to be accurate, neither too negative nor too positive when assessing performance.  Working to make assessments fair is well worth the effort.

5.  Discuss the question, “What do you want to get better at?”  Many performance appraisal forms include useful questions about employees’ long-term goals.  Meaningful professional development for most scientists also involves discussing new competencies they want to develop, ideas they want to explore, papers they want to write.

6.  Develop clear action steps.  The person being appraised should leave the discussion knowing as clearly as possible what the job priorities and expectations are for the next 3 months, as well as with some specific plans for meaningful professional development.  The person giving the appraisal is likely to leave the discussion with some actions needed to support the employee’s professional development.

7.  Schedule a check-in in three months.  Whether or not your organization does quarterly discussions, scheduling your own helps keep action plans on track, address changes and make professional development more meaningful.

I’m interested in your experience with and ideas about this.  Contact me through our web site, which provides links to White Papers that may be useful for you and describes our consulting and training.

Posted in Uncategorized | 2 Comments

Re-Thinking, Improving Leadership and Communications

Why did the promising new drug fail in clinical trials?  Why did our project run over budget and schedule?  Why did we lose that key engineer?

Often, the biggest challenges we face are not scientific, professional or technical but organizational.  To come to life, great individual performance requires great leadership, communications and organization.

This blog provides crucial insight and current best-practice information on the most pressing leadership, communications and organizational challenges I believe we face.

I’ve consulted on leadership, communications, team and organizational performance in a wide range of organizations for over 25 years.  I’ve been lucky to have excellent clients, people and organizations that have worked with intelligence and persistence to address the serious organizational challenges they face.  I’ve learned a great deal from them.

I offer this blog to share my observations and insights and to continue to improve science organizations and leadership.

I’m interested in your experience and ideas.  Contact me through our web site, which provides links to White Papers that may be useful for you and describes our consulting and training.

Posted in Uncategorized | Leave a comment