You Don't Have to Be Perfect
Part 1 of a series about ASPICE misconceptions
psv
1/29/2025 · 5 min read


You Don't Have to Be Perfect – Understanding ASPICE Assessments
Recently, I’ve encountered a lot of friction within the ASPICE user community. I believe this stems from misunderstandings about what an ASPICE assessment really is and how it is conducted.
Taking a Step Back – Why Implement ASPICE?
Before diving into the assessment process, let’s take a step back and ask: Why do you even implement ASPICE?
A common but misguided answer is “because the customer wants it.” An even worse answer is “because we have to pass the assessment.” If your only goal is to get a rating, you’re missing the point. But let’s leave the how of ASPICE implementation for another post and instead focus on what an assessment is and how you are rated.
Understanding the assessment criteria and rating process is essential to avoid unnecessary frustration or disappointment.
What an ASPICE Assessment Is and Is Not
1. Assessment Is About the Process, Not People
An ASPICE assessment evaluates whether your process produces the required outcomes and fulfills its intended purpose. It’s nothing personal: we do not assess individuals or their performance; we assess the system in which they work.
Assessors understand that you may have reasons for certain decisions or omissions. However, the assessment is not about justifying those choices—it’s about verifying process adherence and effectiveness.
2. A Snapshot in Time
An assessment evaluates your project as it currently stands. Future plans or promises to improve do not count toward your rating. This is not about being strict or inflexible—it’s about identifying risks in your current process and giving you the opportunity to address them.
The scope of the assessment is predefined by the assessment sponsor. As assessors, we don’t have unlimited freedom to explore beyond the selected processes and levels. But trust me, we see more than you think.
3. Evidence-Based Evaluation
An assessment is based on objective evidence. You must actively provide answers and demonstrate compliance. Assessors ask questions and interpret results, but you are responsible for supplying the evidence.
The assessment questionnaire is not a mystery—you already know the questions (base and generic practices). You should enter the assessment prepared and have the answers ready. After all, it’s your project and product.
Understanding the Rating Process
Does this sound difficult? Challenging? Demanding? I don’t think so.
Here’s how the rating process works: assessors evaluate base and generic practices to determine process attribute ratings, and from those we derive the capability level. If you’re interested, I can provide a short lecture on organizing an assessment, but for now, just remember that assessors rate process attributes (based on BP/GP compliance) up to the level the sponsor has decided to assess.
At the end, we compile observations and evidence, then interpret the results using the NPLF scale (as defined in ISO/IEC 33020:2019—yes, ASPICE is based on an international standard):
N (0-15%) – Not Achieved: Little or no evidence of meeting the process attribute.
P (>15-50%) – Partially Achieved: Some evidence exists, but achievement is unpredictable.
L (>50-85%) – Largely Achieved: A systematic approach is in place, but some weaknesses remain.
F (>85-100%) – Fully Achieved: A complete, systematic approach with no significant weaknesses.
This should help you understand how assessment results are interpreted.
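If it helps to see those thresholds as code, here is a minimal sketch in Python. The function name nplf_rating is my own illustration, not part of any standard or tool; the thresholds are the ones listed above.

```python
# Minimal sketch: map a process-attribute achievement percentage to the
# NPLF scale of ISO/IEC 33020. The function name is my own illustration.
def nplf_rating(achievement_pct: float) -> str:
    """Return the NPLF rating for an achievement percentage (0-100)."""
    if not 0 <= achievement_pct <= 100:
        raise ValueError("achievement percentage must be between 0 and 100")
    if achievement_pct <= 15:
        return "N"  # Not Achieved
    if achievement_pct <= 50:
        return "P"  # Partially Achieved
    if achievement_pct <= 85:
        return "L"  # Largely Achieved
    return "F"      # Fully Achieved

print(nplf_rating(10))  # N - little or no evidence
print(nplf_rating(60))  # L - systematic, but some weaknesses remain
print(nplf_rating(90))  # F - no significant weaknesses
```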
But We Need to Achieve Level 2 or 3!
I often hear concerns like, “But Petr, our contract requires us to achieve Level 2 or 3! What does this mean for us?” Bear with me; I’ll get to that.
First, we rate Process Attributes on the NPLF scale. These ratings are then mapped to the Capability Level scale.
Let’s break it down with simple examples:
Not Achieved (N): You’re missing something. Even if you have an old, unused document or template, you might score 10%, but it’s still N.
Fully Achieved (F): You don’t have to be perfect, just above 85% of what is expected. Mistakes happen, but as long as they are exceptions rather than the norm, it’s fine.
The key distinction is between Partially Achieved (P) and Largely Achieved (L). This hinges on whether your approach is systematic or unpredictable.
Example Scenario:
Imagine a project already running for six months. According to the plan, 60% of the product should be reviewed now, but the objective evidence shows only 20% has been reviewed.
Case 1: Three months ago, one person completed 20% review in a single heroic effort.
Case 2: The review process was slow at first but became systematic: 0% after two months, 5% after three, 10% after four, and now 20%, steadily closing the gap.
Case 1 is rated Partially Achieved because the approach is unpredictable.
Case 2 is rated Largely Achieved because there is a systematic effort. And frankly, if there are no other weaknesses and the coverage reaches, let’s say, 50% of the required 60%, we could even rate it Fully Achieved.
This is key to understanding ASPICE rating—it’s all about predictability and risk management.
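To make the distinction concrete, here is a toy sketch of the two cases. It is purely my own illustration: “systematic” is reduced to “steady recent progress,” which is a crude proxy, and a real assessor’s judgment looks at far more than a trend line.

```python
# Toy illustration of the two cases above: monthly review coverage in %.
case_1 = [0, 0, 0, 20, 20, 20]  # one heroic effort, then nothing
case_2 = [0, 0, 5, 10, 15, 20]  # slow start, steadily closing the gap

def looks_systematic(coverage: list[int]) -> bool:
    """Crude proxy: no regressions, and progress in recent periods."""
    deltas = [b - a for a, b in zip(coverage, coverage[1:])]
    recent = deltas[-3:]
    return all(d >= 0 for d in deltas) and sum(1 for d in recent if d > 0) >= 2

print(looks_systematic(case_1))  # False -> unpredictable, "P" territory
print(looks_systematic(case_2))  # True  -> systematic, "L" territory
```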
Mapping Attributes to Capability Levels
Each Capability Level has associated Process Attributes, which are rated based on Base Practices (for L1) and Generic Practices (for L2-5).
To achieve a capability level, its process attribute(s) must be rated better than Partially Achieved, meaning at least 51% and a systematic approach. Not that demanding, right?
To go one level higher, the lower level’s attributes must be Fully Achieved, which still means only above 85%.
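As a sketch of that mapping, assuming the ISO/IEC 33020 rule (a level is achieved when its own attributes are at least Largely Achieved and every lower-level attribute is Fully Achieved; attribute names follow Automotive SPICE, shown up to Level 3 for brevity):

```python
# Sketch of the capability-level mapping from ISO/IEC 33020: level n is
# achieved when its own attributes are at least "L" and every attribute
# of the levels below is "F".
PA_PER_LEVEL = {
    1: ["PA 1.1"],            # Process performance
    2: ["PA 2.1", "PA 2.2"],  # Performance & work product management
    3: ["PA 3.1", "PA 3.2"],  # Process definition & deployment
}

def capability_level(ratings: dict[str, str]) -> int:
    """ratings maps each process attribute to its NPLF rating."""
    achieved = 0
    for level in sorted(PA_PER_LEVEL):
        own = PA_PER_LEVEL[level]
        lower = [pa for lv in range(1, level) for pa in PA_PER_LEVEL[lv]]
        if not all(ratings.get(pa) in ("L", "F") for pa in own):
            break
        if not all(ratings.get(pa) == "F" for pa in lower):
            break
        achieved = level
    return achieved

# PA 1.1 Fully Achieved plus PA 2.x Largely Achieved is enough for Level 2:
print(capability_level({"PA 1.1": "F", "PA 2.1": "L", "PA 2.2": "L"}))  # 2
# PA 1.1 only Largely Achieved caps the project at Level 1:
print(capability_level({"PA 1.1": "L", "PA 2.1": "F", "PA 2.2": "F"}))  # 1
```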
Key Takeaways:
Assessments are objective, evidence-driven, and provide a snapshot in time.
You know the questionnaire in advance—prepare accordingly.
Ratings focus on predictability and risk to final delivery.
Achieving 51% is good enough for L1.
Achieving 85% on L1 and 51% on L2 is good enough for L2.
We don’t expect perfection; we just need to see that deviations are exceptions, not the norm.
Use assessment results for improvement—don’t wait until the project is almost finished to conduct an assessment. (I’ll cover assessment timing in a future post.)
I hope this brings some peace of mind. We’re not bloodthirsty monsters—or are we? 😉
Share your experiences with assessments in the comments!
Petr
Annex: Unachievable Theory? Not at All.
After publishing my last piece, I realized I missed a crucial point: ASPICE compliance is not unachievable. Quite the opposite.
After years of training, assessing, and consulting, I can confidently say that the reality is very different from the common perception. When I assess 10 projects, the pattern is clear:
5 are in the "grey zone"—functional but riddled with inefficiencies, gaps, and "OK, but..." moments.
3 are complete disasters—either faked for assessment, implemented with no real intent to improve, or openly hostile toward the process.
And then, there are 2—the ones that prove me right. These teams demonstrate that ASPICE can work seamlessly without additional cost when implemented for use, not just for assessment.
And these two? They are what keep me motivated. Suddenly, an assessment is no longer an interrogation full of pain. Instead, it becomes an engaging, professional discussion with skilled colleagues—people who genuinely care about software engineering, who exchange insights about great tooling, who share tips and tricks to make our work easier.
Trust me—reality is far better than you might think. You just need a bigger sample size to see it.