To assess the quality of a proposal, we need to evaluate whether it is clear and correct about WHO the project is intended for; WHY the project should be undertaken (the problem analysis showing its relevance); WHAT is expected to be made available to end-users and WHAT will be done to make that happen (the Results and corresponding Activities), as well as WHAT will not (the Assumptions); and HOW the implementing organisations will make it happen (internal capacity building).
If four people read the same proposal and score it as follows, does that sound familiar? What should you do?
Scores (0 – 10) on these questions for the same proposal, given by four assessors A – B – C – D:
WHO? 4 – 6 – 2 – 6.5
WHY? 8 – 4 – 10 – 10
WHAT? 6 – 4 – 7 – 6.5
HOW? 4 – 8 – 6 – 0
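To make the disagreement visible at a glance, here is a small Python sketch (the numbers are simply the scores listed above) that computes the spread per criterion, i.e. the gap between the highest and lowest score an assessor gave:

```python
# Scores given by assessors A-D for the same proposal (taken from the table above)
scores = {
    "WHO": [4, 6, 2, 6.5],
    "WHY": [8, 4, 10, 10],
    "WHAT": [6, 4, 7, 6.5],
    "HOW": [4, 8, 6, 0],
}

for criterion, values in scores.items():
    spread = max(values) - min(values)  # widest gap between two assessors
    print(f"{criterion}: scores {values}, spread = {spread}")
```

On these numbers the HOW criterion shows the widest disagreement: a spread of 8 points on a 10-point scale.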
Apparently everybody has his or her own interpretation of these concepts and of what to look for in a proposal. If that is the case, approval of the proposal depends very much on the person who does the assessment….
Interestingly, if only one person does the assessment, he or she will never realise that others might judge quite differently. Is that the reason assessments are mostly done by only one person? Do you recognise this?
We advocate harmonising the understanding of these concepts in order to make the assessment more objective and standardised.
What experiences do you have with this challenging function of steering the quality of projects?
Want to know how you can organise such a quality assessment of a proposal?
Leave your name and email and tick the right box of your interest and we will get back to you as soon as possible!


I did the same quality test on the same concept note / draft proposal with the participants of some in-house courses (all from the same organisation) and got the following scores (1 for poor, 10 for excellent):
Top and Middle management:
for WHO? 7, 8, 9, 9, 8, 6, 8, 9, 7 : Average = 7.88
WHY? 6, 7, 8, 7, 7, 5, 7, 7, 5 : Average = 6.55
WHAT? 8, 7, 9, 8, 7, 7, 7, 7, 7 : Average = 7.44
HOW? 7, 7, 9, 8, 8, 6, 7, 7, 9 : Average = 7.55
Operational staff:
for WHO? 9, 8, 8, 6, 8, 6, 5 : Average = 7.14
WHY? 10, 6, 7, 5, 8, 4, 9 : Average = 7.00
WHAT? 10, 8, 7, 8, 6, 8, 7 : Average = 7.71
HOW? 10, 7, 8, 8, 7, 9, 9 : Average = 8.28
Young / new staff:
for WHO? 5, 8, 7, 5, 6, 5, 5, 5, 6, 8 : Average = 6.00
WHY? 7, 4, 6, 3, 8, 5, 3, 2, 7, 5 : Average = 5.00
WHAT? 7, 6, 7, 6, 7, 6, 6, 10, 7, 7 : Average = 6.90
HOW? 2, 4, 6, 4, 7, 8, 8, 10, 7, 6 : Average = 6.20
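To compare the groups concretely, here is a Python sketch (using exactly the scores listed above; the group labels are just those of the exercise) that computes the mean and the spread, as sample standard deviation, per group and criterion. A larger spread means less agreement within the group:

```python
import statistics

# Scores from the in-house exercise, copied from the lists above
scores = {
    "Top and Middle management": {
        "WHO": [7, 8, 9, 9, 8, 6, 8, 9, 7],
        "WHY": [6, 7, 8, 7, 7, 5, 7, 7, 5],
        "WHAT": [8, 7, 9, 8, 7, 7, 7, 7, 7],
        "HOW": [7, 7, 9, 8, 8, 6, 7, 7, 9],
    },
    "Operational staff": {
        "WHO": [9, 8, 8, 6, 8, 6, 5],
        "WHY": [10, 6, 7, 5, 8, 4, 9],
        "WHAT": [10, 8, 7, 8, 6, 8, 7],
        "HOW": [10, 7, 8, 8, 7, 9, 9],
    },
    "Young / new staff": {
        "WHO": [5, 8, 7, 5, 6, 5, 5, 5, 6, 8],
        "WHY": [7, 4, 6, 3, 8, 5, 3, 2, 7, 5],
        "WHAT": [7, 6, 7, 6, 7, 6, 6, 10, 7, 7],
        "HOW": [2, 4, 6, 4, 7, 8, 8, 10, 7, 6],
    },
}

for group, criteria in scores.items():
    print(group)
    for criterion, values in criteria.items():
        mean = statistics.mean(values)
        spread = statistics.stdev(values)  # sample standard deviation
        print(f"  {criterion}: mean = {mean:.2f}, spread = {spread:.2f}")
```

For example, on WHY the young/new staff show a spread of 2.00 against roughly 1.01 for top and middle management, which quantifies the observation below that the senior group varies little while the junior group varies a lot.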
Interesting findings …. apparently senior staff seem to be less critical, with little individual variation, while among the young staff, some of whom were very critical, there were large variations within the group. Why would that be?
After the training many claimed to understand my rather different and much lower scores. Given the very weak focus and the poorly justified relevance, I would score:
WHO = 2; WHY = 2; WHAT = 7; HOW = 5
The aim is that perceptions of proposal quality (the scores given) converge at the individual level and become more harmonised across the different staff levels.
Some more observations:
– The scoring in all groups seems to be stricter on WHY than on any other criterion (even “WHO”, which also scores low). I suppose people are less tolerant of spending on a “not justified cause” or, alternatively, the capacity to identify weaknesses in problem analysis is more developed in all groups than their capacity for other kinds of analysis.
– Note that every group scores its lowest average on a different criterion, namely the one people feel most familiar with: managers (people dealing with strategies/goals/accountability) score the WHY criterion more strictly. Operational staff, on the other hand, are more critical of the HOW performance in the proposal (not forgiving capacity weaknesses), while junior staff (probably stemming from the target groups and field work) are intolerant of poor performance in describing what they know best: the target beneficiaries (the “WHO”).
Is it by chance, or is it a rule, that every assessor tends to underscore on the criterion he or she is most familiar with, corresponding to his or her core business, skills, or expert comfort zone? My personal experience with fellow assessors confirms this observation.
If this is true, then manipulating the selection of assessors’ profiles and the assignment of proposals can strongly influence the assessment results.
Dear Erik,
Want to know how you can organise such a quality assessment of a proposal? … yes, I would like to know your approach … it is often a painful exercise.
Cheers, Hugo