Why does Global App Testing focus on "the last mile" of testing?
Global App Testing sometimes says that it focuses on "the last mile" of software testing. But why? What does it mean? And how does that enable us to serve our clients better?
What is the last mile problem?
If you're not already familiar with "the last mile problem" in transport and logistics, you might be surprised by how much it costs to move a package the final mile of its journey.
Here's an example. Imagine you need to ship a package from a factory in China to a domestic address in Oakland, California. According to the last mile problem, it would probably cost you about the same to get the package 6,200 miles from Shenzhen to your depot in Oakland as it would to get it the final mile from your depot to the domestic address. Or slightly more, according to Accenture: about 53% of the spend goes on that last mile.
That's an enormous disparity. You can ask your local AI bot why transport and logistics suffer from a last mile problem, but I'm interested in how it makes us think about productivity in contexts where a workload is partially automated. The last mile can distort the way we measure, report, and think about productivity, and it's a useful idea for managers to get clear on when they're thinking about process, including in software testing.
Here's an example. If a metaphorical "last mile" needs to be completed for the task to be done, and it's 0.1% of the distance and 50% of the effort, does it account for 0.1% of the work or 50% of it? Are we 1,000X more productive when we're flying from Shenzhen to Oakland than when we're walking between depots in Oakland? Or can we only calculate total productivity once the task is complete, across both legs? (Could this have anything to do with the decoupling of productivity and pay we hear about in some countries, including the UK?) With the advent of LLMs, individual tasks will become more about "the last mile", and individual workloads will become more last-mile-y: fussing about formatting rather than writing a blog. (Did I write this post? Wasn't most of the work in the uploading and formatting? What do you think?)
How is the last mile relevant to software testing?
Software testing obviously has a "last mile." In functional testing, for example, the objective of many quality engineers is to automate nearly every regression test and unit test that they can; when they're successful, these can be run at enormous speed and scale with a single "click". It's still common to project that nearly all software tests will be automated, although such projections never quite account for things like UX and usability testing.
Global App Testing generally focuses on the final 1%. Or, depending on how you look at it, the final 50%. Given that the 1% of tests we focus on are the most difficult to conduct in-house, the toughest to automate, and the most urgent for completing a strategic objective, the marginal productivity improvement for a company is massive, even when the execution process is still manual. Internally, this metaphor keeps our manual, real-world, physical testing org from trying to outrun jet planes. Externally, it lets us think carefully about the value we can bring by optimising the messiest, most "real-life" part of a client's testing stack.
Is my software test a "last mile test"?
With that in mind, "is it a last mile test?" is a great proxy for whether it's sensible to give a software test to a crowdtesting company like Global App Testing, rather than do it in-house, automate it, or not conduct the test at all.
Here are three tests you can apply. If they fit, hopefully you'll get in touch and start testing with Global App Testing.
First, a last mile test should stand between a business and something valuable
The name "last mile test" can be misleading. Last mile testing doesn't necessarily mean "pre-release tests". The test only needs to "block" the business from significant value which completing it would unlock. That can be a release, or it can be something else, but it needs to be acutely relevant to a strategic objective of the business.
So a last-mile test could be one which legal requires before approving a local launch. It could be a test which is necessary to improve local signups significantly. It could be a test which stands between the business and a commitment, like getting compliant with accessibility regulation or becoming a more inclusive product by next year. One of the big Global App Testing differentiators is that we have a deeper strategic engagement with our clients, so we're able to identify bugs and feedback which actually matter in terms of outcomes. This is a big part of what "last-mile testing" means.
Second, a last mile test cannot be automated, or should at least be very hard to automate
We don't compete with AI. Why would we? If a test can be done better by an AI agent than by us, we would advise the client to take it to an AI tool. That work will be absorbed by the AI, and we want the client to get the best possible value they can. Meanwhile, the total amount of test work in the economy is growing, including categories which can't be automated:
- Unique perspectives not adequately covered by an AI review
- Languages where language-based AI agents are weaker
- Perspectives AI can't cover because they are legally required to be associated with a real human, including tests which involve cash
- Functional tests which involve having a body, eyes, or a physical presence
- Tests which involve supplying unstructured data to AIs, for example biometric data
- Tests which relate to a volatile or early-stage product, and aren't worth the trouble of automating while the product is unstable
 
And many, many more. Our thinking on automation is similar to where it was a few years ago: we want to help clients to automate more of their software testing.
Third, a last mile test should be difficult to do yourself  
Global App Testing is the best way available to do many kinds of software test. But we know that clients are naturally inclined to try to conduct a test set in-house first, and to resort to a crowd when they find that approach doesn't work.
When a test is sufficiently inconvenient to do in-house that it "blocks" you from your strategic outcome, that makes it a last mile test. All kinds of reasons drive up that inconvenience: tests which need to take place on certain devices or with certain financial instruments, or tests which require specific expertise like accessibility testing, often become "last-mile" because the team simply doesn't have the people in place to execute them. But generally, last-mile tests are the tests which are the most painful.
Give your last mile tests to Global App Testing and get 50% more productive
So, what's your last mile test? And could Global App Testing help? Our enormous track record of helping businesses drive better experiences for users in every locale means that when we take on the 1% of testing you find most difficult, we can drive up your total productivity by 50%.
Just ask Booking.com, who tell us that we saved their lead QA 70% of her time.
Want help to drive your growth around the world?
We can help you drive global growth, better accessibility, and better product quality at every level.