The fundamental concept of business-to-business CRM is often described as enabling the larger business to be as responsive to the needs of its customers as a small business. In the early days of CRM, "responsive" too often became "reactive". Successful larger businesses recognise that they have to be proactive in seeking out, and paying attention to, the views, concerns, needs and levels of satisfaction of their customers. Paper-based surveys, such as those left in hotel bedrooms, generally have a low response rate and are usually completed by customers with a grievance. Telephone-based interviews tend to be affected by the Cassandra phenomenon. Face-to-face interviews are expensive and can be led by the interviewer.
A large, international hotel chain wanted to attract more business travellers. It decided to conduct a customer satisfaction survey to find out what it needed to improve in its services for this type of guest. A written survey was placed in each room and guests were encouraged to complete it. However, when the survey period was over, the hotel discovered that the only people who had filled in the surveys were children and their grandparents!
A large manufacturing company conducted the first of what was intended to be an annual customer satisfaction survey. In the first year, the satisfaction score was 94%. The following year, with the same basic survey topics but a different survey vendor, the satisfaction score dropped to 64%. Ironically, over the same period, the company's overall revenues doubled!
What had changed? The questions were simpler and phrased differently. The order of the questions was different. The format of the survey was different. The targeted respondents were at a different management level. And the Overall Satisfaction question was placed at the end of the survey.
Although all customer satisfaction surveys are used for gathering people's opinions, survey designs vary dramatically in length, content and format. Analysis techniques may utilize a wide variety of charts, graphs and narrative interpretations. Companies often use a survey to test their business strategies, and some base their business plan upon their survey's results. BUT… troubling questions often emerge.
Are the results always accurate? …Sometimes accurate? …At all accurate? Are there "hidden pockets of customer discontent" which a survey overlooks? Can the survey information be trusted enough to take major action with confidence?
As the examples above show, different survey designs, methodologies and population characteristics will dramatically alter the results of a survey. It therefore behoves an organization to make absolutely sure that its survey process is accurate enough to generate a true representation of its customers' opinions. Otherwise, there is no way the business can use the results for precise action planning.
The characteristics of a survey's design, and the data collection methodologies employed to conduct it, require careful forethought to ensure comprehensive and accurate results. The discussion that follows summarizes several key "rules of thumb" that must be adhered to if a survey is to become a company's most valued strategic business tool.
Survey questions should be categorized into three types: the Overall Satisfaction question – "How satisfied are you overall with XYZ Company?"; Key Attributes – satisfaction with key areas of the business, e.g. Sales, Marketing, Operations, etc.; and Drill-Down questions – satisfaction with issues unique to each attribute, upon which action can be taken to directly remedy that Key Attribute's issues.
The Overall Satisfaction question is placed at the end of the survey so that its answer reflects more thorough thinking, the respondent having first considered answers to all the other questions. A survey, if constructed properly, will yield a wealth of information. These elements of design should be taken into consideration: First, the survey should be kept to a reasonable length. More than 60 questions in a written survey becomes tiring. Anything over 8-12 questions begins taxing the patience of participants in a phone survey.
Second, the questions should use simple sentences with short words. Third, questions should ask for an opinion on just one topic at a time. For example, the question "How satisfied are you with the products and services?" cannot be answered effectively, because a respondent may have conflicting opinions on products versus services.
Fourth, superlatives such as "excellent" or "very" should not be used in questions. Such words tend to lead a respondent toward an opinion.
Fifth, "feel happy" questions yield subjective answers upon which little specific action can be taken. For example, the question "How do you feel about XYZ Company's industry position?" produces responses that are of no practical value in terms of improving an operation.
Though the fill-in-the-dots format is one of the most common types of survey, it has significant flaws which can discredit the results. First, all prior answers are visible, which leads to comparisons with current questions, undermining candour. Second, some respondents subconsciously seek symmetry in their responses and are guided by the pattern of their answers, not their true feelings. Third, because paper surveys are generally organized into topic sections, a respondent is more apt to fill down a column of dots within a category while giving little consideration to each individual question. Some Internet surveys, constructed in the same "dots" format, often produce the same tendencies, especially if inconvenient sideways scrolling is required to answer a question.
In a survey conducted by Xerox Corporation, over one third of the responses were discarded because the participants had clearly run down the columns in each category instead of carefully considering each question.
TELEPHONE SURVEYS Though a telephone survey yields a more accurate response than a paper survey, it can also have inherent flaws that impede quality results, such as:
First, when a respondent's identity is clearly known, concern over the possibility of being challenged or confronted with negative responses at a later time produces a strong positive bias in their replies (the so-called "Cassandra Phenomenon").
Second, research indicates that people become friendlier as a conversation grows longer, thus influencing their responses to later questions.
Third, human nature dictates that people want to be liked. Gender biases, accents, perceived intelligence, or compassion all influence responses. Similarly, senior managers' egos often emerge as they try to convey their wisdom.
Fourth, telephone surveys are intrusive on a senior manager's time. An unannounced call may create an initial negative impression of the survey. Many respondents may be partially focused on the clock rather than the questions. Optimum responses depend on a respondent's clear mind and free time, two things senior managers often lack. In a recent multi-national survey where targeted respondents were offered the choice between a telephone interview and other methods, ALL chose the other methods.
Taking precautionary steps, such as keeping the survey brief and using only highly trained callers who minimize idle conversation, can help reduce these issues, but will not eliminate them.
The objective of a survey is to capture a representative cross-section of opinions within a group of people. Unfortunately, unless most of the people participate, two factors will skew the results:
First, negative people answer surveys more often than positive people, because human nature encourages "venting" negative emotions. A low response rate will therefore generally produce more negative results (see drawing).
Second, a smaller sample of a population is less representative of the whole. For example, if 12 people are asked to take a survey and 25% respond, then the opinions of the other nine people are unknown and could be entirely different. However, if 75% respond, then only three opinions are unknown, and the nine who responded are far more likely to represent the opinions of the whole group. One can assume that the higher the response rate, the more accurate the snapshot of opinions.
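The response-rate arithmetic above can be sketched as a small helper; the 12-person group and the 25%/75% rates are the figures from the example, everything else is illustrative.

```python
def unknown_opinions(population: int, response_rate: float) -> int:
    """Number of people whose opinions remain unknown at a given response rate."""
    responded = round(population * response_rate)
    return population - responded

# The 12-person example from the text:
print(unknown_opinions(12, 0.25))  # -> 9 opinions unknown
print(unknown_opinions(12, 0.75))  # -> 3 opinions unknown
```

At a 25% response rate, three-quarters of the group's opinions are simply missing; at 75%, only a quarter are, which is why the higher rate gives a more trustworthy snapshot.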
Totally Satisfied vs. Very Satisfied… Debates have raged over the scales used to depict degrees of customer satisfaction. In recent years, however, research has shown that the "totally satisfied" customer is between 3 and 10 times more likely to initiate a repurchase, and that measuring this "top-box" category is significantly more precise than any other means. Moreover, surveys which measure the percentage of "totally satisfied" customers, as opposed to the traditional sum of "very satisfied" and "somewhat satisfied", provide a far more accurate indicator of business growth.
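To make the distinction concrete, here is a minimal sketch of the two measures. The scale labels and the sample responses are hypothetical, invented for illustration; only the "top-box vs. traditional" contrast comes from the text.

```python
from collections import Counter

# Hypothetical responses on a labelled satisfaction scale (illustrative only).
RESPONSES = [
    "totally satisfied", "very satisfied", "very satisfied",
    "totally satisfied", "somewhat satisfied", "neutral",
    "very satisfied", "dissatisfied",
]

def top_box_score(responses) -> float:
    """Percentage of respondents in the single 'totally satisfied' top box."""
    counts = Counter(responses)
    return 100.0 * counts["totally satisfied"] / len(responses)

def traditional_score(responses) -> float:
    """Traditional measure: 'very satisfied' plus 'somewhat satisfied'."""
    counts = Counter(responses)
    satisfied = counts["very satisfied"] + counts["somewhat satisfied"]
    return 100.0 * satisfied / len(responses)

print(top_box_score(RESPONSES))      # -> 25.0
print(traditional_score(RESPONSES))  # -> 50.0
```

The two measures can diverge sharply on the same data, which is why a survey that reports only the traditional sum can paint a much rosier picture than the top-box figure warrants.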
Other scale issues… There are several rules of thumb that are often used to ensure more valuable results:
Many surveys provide a "neutral" choice on a five-point scale for people who may not want to answer a question, or who are unable to make a decision. This "bail-out" option decreases the number of opinions gathered, thus diminishing the survey's validity. Surveys that instead use "insufficient information" as a more definitive middle-box choice persuade a respondent to make a decision, unless they genuinely lack the knowledge to answer the question.
Scales of 1-10 (or 1-100%) are perceived differently by different age groups. People who were schooled under a percentage grading system often consider a 59% to be "flunking". These deep-rooted tendencies can skew different people's perceptions of survey results.
There are several additional details that can enhance the overall polish of a survey. While a survey should be an exercise in communications excellence, the experience of taking it should also be positive for the respondent, as well as valuable for the survey sponsor.
First, People – Those responsible for acting upon issues revealed by the survey should be fully engaged in the survey development process. A "team leader" should be accountable for ensuring that all pertinent business categories are included (up to 10 is ideal), and that designated individuals take responsibility for responding to the results for each Key Attribute.
Second, Respondent Validation – Once the names of potential survey respondents have been selected, they should be individually called and "invited" to participate. This ensures the person is willing to take the survey and elicits an agreement to do so, thus improving the response rate. It also ensures the person's name, title and address are correct, an area in which inaccuracies are commonplace.
Third, Questions – Open-ended questions are best avoided in favour of simple, concise, single-subject questions. The questions should also be randomised, mixing the topics, forcing the respondent to keep thinking about a fresh subject rather than building upon a response to the previous question. Finally, questions should be phrased in positive tones, which not only helps maintain an objective and uniform attitude while answering the survey, but also enables uniform interpretation of the results.
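The randomisation step above can be sketched simply: take the question bank, grouped by Key Attribute, and shuffle it so consecutive questions mix topics. The attribute names and questions below are hypothetical examples, not from the text.

```python
import random

# Hypothetical drill-down questions grouped by Key Attribute (illustrative).
QUESTIONS = {
    "Sales": [
        "How satisfied are you with response times to quotations?",
        "How satisfied are you with the accuracy of quotations?",
    ],
    "Operations": [
        "How satisfied are you with delivery reliability?",
        "How satisfied are you with order tracking?",
    ],
}

def randomised_order(questions_by_attribute, seed=None):
    """Flatten the question bank and shuffle it so topics are interleaved,
    rather than presented one category block at a time."""
    rng = random.Random(seed)  # seed only for reproducible sketches
    flat = [q for qs in questions_by_attribute.values() for q in qs]
    rng.shuffle(flat)
    return flat
```

A fixed seed makes the order reproducible for testing; a live survey would omit it so each respondent sees a different ordering.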
Fourth, Results – Each respondent should receive a synopsis of the survey results, either in writing or – preferably – in person. By offering at the outset to share the survey results with each respondent, interest is generated in the process, the response rate increases, and the company is left with a standing invitation to return to the customer later and close the communication loop. Not only does this provide a means of exploring and dealing with identified issues on a personal level, it also often increases an individual's willingness to participate in later surveys.
A well-structured customer satisfaction survey can offer a wealth of invaluable market intelligence that human nature will not otherwise allow access to. Properly done, it can be a means of establishing performance benchmarks, measuring improvement over time, building individual customer relationships, identifying customers at risk of loss, and improving overall customer satisfaction, loyalty and revenues. If a company is not careful, however, it can become a source of misguided direction, wrong decisions and wasted money.