
Published 22 August, 2024
Updated 27 April, 2026


Why Do My Apps Keep Crashing? A Complete Guide

By Silvanus Alt, PhD
TABLE OF CONTENTS
  1. Key takeaways
  2. Why apps crash: the five real causes
  3. 15 crash patterns I see over and over
  4. How to actually diagnose why your app keeps crashing
  5. Industry-specific crash considerations
  6. Tools by category
  7. 10 common mistakes teams make when chasing crashes
  8. A crash diagnostics maturity model
  9. Where UXCam fits in your crash workflow
  10. Frequently asked questions

If you're asking "why do my apps keep crashing," the honest answer is usually one of five things: a memory leak, a bad third-party SDK, a network edge case the team never tested, an OS update that broke an assumption, or a device-specific bug nobody reproduced in QA. I've reviewed hundreds of crash investigations across UXCam's 37,000+ installed products, and the pattern I see most often is not exotic. Crashes hide in the gap between what engineers test on fast Wi-Fi in a well-lit office and what users actually do on a three-year-old Android phone in a basement elevator.

This guide walks through the real causes, how to diagnose them, and how to stop guessing and start watching the exact session that crashed. I'll also cover the 15 patterns I see repeatedly across fintech, retail, and streaming apps, a maturity model your team can grow into, and the tools I actually recommend when a PM asks me for a shortlist.

Key takeaways

  • Most app crashes trace back to memory leaks, unhandled exceptions, bad third-party libraries, or untested device and network conditions, not mysterious "bugs."

  • Crash logs tell you where the app died. Session replay tells you what the user did to get there, which is the part you actually need to fix it.

  • Rage taps and UI freezes usually precede a crash by 10-30 seconds. Catching those signals early gives you a repro path before the stack trace lands.

  • Infrequent updates and stale SDKs are a leading cause of crashes after iOS and Android OS releases each fall.

  • Recora cut support tickets by 142% after UXCam surfaced a press-and-hold gesture users were misreading as a crash. Sometimes the "crash" is a UX problem in disguise.

  • Tara AI, UXCam's built-in analyst, scans sessions automatically and surfaces the specific screens, gestures, and devices driving your crash rate.

Why apps crash: the five real causes

The first sentence every product manager asks me is some version of "why do my apps keep crashing, and why can't engineering just fix it?" The answer usually lives in one of these buckets.

1. Memory leaks and poor memory management

Apps that hold on to objects they no longer need, leak image buffers, or keep database connections open will eventually exhaust the device's available RAM. On mid-range Android devices with 2-4 GB of RAM, this is brutal. iOS is stricter: the OS will simply terminate your process when it crosses a memory threshold, and the user sees a crash.

Watch for retained view controllers, un-recycled bitmaps, observers that are never removed, and background tasks that never complete. Tools like Android Studio's Memory Profiler and Xcode Instruments are the baseline. For React Native and Flutter teams, Flipper and Dart DevTools surface the same data across the bridge.
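As a concrete sketch of the "observers that are never removed" pattern, here is a minimal Java analogue (EventBus and Screen are illustrative names, not a specific framework): a long-lived bus holds a strong reference to every registered listener, so a screen that registers in its constructor must unregister in its teardown or it can never be garbage-collected.

```java
import java.util.ArrayList;
import java.util.List;

// A long-lived object that keeps strong references to its listeners.
class EventBus {
    private final List<Runnable> listeners = new ArrayList<>();

    void register(Runnable listener)   { listeners.add(listener); }
    void unregister(Runnable listener) { listeners.remove(listener); }
    int listenerCount()                { return listeners.size(); }
}

class Screen {
    private final EventBus bus;
    private final Runnable listener = () -> { /* update UI */ };

    Screen(EventBus bus) {
        this.bus = bus;
        bus.register(listener);      // registered on create...
    }

    void destroy() {
        bus.unregister(listener);    // ...and removed on destroy; skip this
    }                                // and the bus retains the screen forever
}
```

The fix is mechanical once you see it: every registration needs a matching unregistration in the corresponding teardown callback.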

2. Unhandled exceptions on poor network conditions

Your QA team tested on office Wi-Fi. Your user is on a train, in an elevator, or on a congested cell tower at 5 PM. Requests time out, sockets drop mid-payload, and any code path that assumes "the response will always arrive" will throw.

This is the single most common crash pattern I see in fintech and food delivery apps. A checkout API call fails, the JSON parser hits null, and the app dies mid-transaction. According to Statista's mobile performance data, users abandon apps after a single crash, so a weak network path becomes a retention problem fast. Apple's Network Link Conditioner on iOS and Android Studio's emulator network throttling let you simulate 2G, Edge, and lossy 3G before you ship.
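Here is a minimal Java sketch of the defensive path, with an illustrative "orderId=..." payload format standing in for real JSON: a null body (request timed out) or a truncated body (socket dropped mid-payload) yields an empty result instead of a mid-transaction NullPointerException.

```java
import java.util.Optional;

// CheckoutParser is illustrative, not a real API. The point is that every
// failure mode of a flaky network maps to Optional.empty(), never a throw.
class CheckoutParser {
    static Optional<String> parseOrderId(String body) {
        if (body == null) return Optional.empty();          // timeout: no body at all
        int eq = body.indexOf('=');
        if (!body.startsWith("orderId") || eq < 0 || eq == body.length() - 1) {
            return Optional.empty();                        // truncated or malformed
        }
        return Optional.of(body.substring(eq + 1));
    }
}
```

The caller then renders a retry state on empty instead of assuming the value exists.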

3. Third-party SDKs and library incompatibilities

Every SDK you add is a surface area you don't fully control. Analytics SDKs, ad networks, attribution tools, payment providers, crash reporters (ironic, I know): any of them can ship a version that conflicts with another library, break on a new OS, or introduce a native crash that bubbles up as yours.

The Equifax breach in 2017 was a famous case of an unpatched Apache Struts vulnerability, and the same maintenance discipline applies to mobile. Pin versions, read changelogs, and run Software Composition Analysis on every build. Snyk and GitHub Dependabot will flag outdated mobile dependencies automatically.

4. Device and OS fragmentation

Android device diversity means your app runs on thousands of hardware and OS combinations. iOS is simpler but not simple: a new iOS release every September routinely breaks assumptions about safe areas, permissions, or background execution.

If you only test on the latest two iPhones and a Pixel, you are shipping blind to the 60% of your users on older hardware. Device farms like Firebase Test Lab, BrowserStack App Live, and AWS Device Farm make it cheap to run smoke tests on real hardware you do not own.

5. Infrequent updates and technical debt

Apps that go months without updates accumulate risk. OS vendors deprecate APIs, security patches go unapplied, and SDK vendors stop supporting old versions. Meta ships weekly updates to Facebook for exactly this reason, not just features.

If your release cadence is quarterly, your crash rate will climb whether you deploy new code or not.

15 crash patterns I see over and over

After enough investigations, the same mistakes start to repeat. Here are the specific patterns I flag first when a team shares a crash dashboard with me.

1. Null pointer on async response

The network call returns after the user has already navigated away. The view is deallocated, the callback fires, and the reference is null. Always check view lifecycle state before touching UI in completion handlers, and cancel in-flight requests in teardown callbacks such as onDestroy on Android or viewWillDisappear on iOS.
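A minimal sketch of that lifecycle guard, with DetailScreen as an illustrative stand-in for a real screen class: the late-arriving response is simply dropped if the screen is already gone.

```java
// The callback checks a destroyed flag before touching state, turning a
// would-be null-pointer crash into a harmless no-op.
class DetailScreen {
    private boolean destroyed = false;
    String title = null;

    void destroy() { destroyed = true; }

    // Called whenever the (possibly slow) network response arrives.
    // Returns true only when the result was actually applied.
    boolean onResponse(String payload) {
        if (destroyed) return false;   // view is gone: drop the result
        title = payload;               // safe: view still alive
        return true;
    }
}
```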

2. Main thread blocked by image decoding

Loading a full-resolution product photo synchronously on the UI thread will freeze the app, trip an Android ANR, and often cascade into a crash. Offload decoding to libraries like Glide, Coil, SDWebImage, or Kingfisher.
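The offload pattern itself is simple, sketched here in plain Java with fakeDecode standing in for real bitmap work (in production the libraries above handle this, plus caching and downsampling): the expensive step runs on a worker pool and the calling thread returns immediately.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

class ImageLoader {
    // Daemon worker threads so the pool never blocks process shutdown.
    private static final ExecutorService pool = Executors.newFixedThreadPool(2, r -> {
        Thread t = new Thread(r);
        t.setDaemon(true);
        return t;
    });

    // Stand-in for real bitmap decoding; the point is that it is expensive.
    static String fakeDecode(String path) {
        return "bitmap:" + path;
    }

    // The UI thread calls this and returns immediately; decoding runs elsewhere.
    static Future<String> loadAsync(String path) {
        return pool.submit(() -> fakeDecode(path));
    }

    // Convenience for callers that want the result synchronously.
    static String loadBlocking(String path) {
        try {
            return loadAsync(path).get();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```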

3. Background-to-foreground state loss

Android aggressively kills backgrounded apps to reclaim memory. When the user returns, your activity is recreated but your ViewModel may not be. Read Google's state saving guidance and test by toggling "Don't keep activities" in developer options.

4. Deep link into an expired session

A push notification deep-links into a screen that assumes an authenticated user, and the session has expired. Always validate auth state at the deep-link entry point and route to login when required, not three screens deep when the query fails.

5. Unchecked force-unwrap in Swift

Every force-unwrap (!) in Swift is a crash waiting to happen. SwiftLint rules like force_unwrapping and force_cast will flag these in CI. Treat optionals with guard let and if let instead.

6. Out-of-memory on large list rendering

Rendering 10,000 items in a single list view without recycling will OOM on low-end devices. Use RecyclerView on Android, UICollectionView with diffable data sources on iOS, and FlatList with windowSize tuning in React Native.

7. Concurrent modification during iteration

Mutating a collection while another thread iterates it throws ConcurrentModificationException on Android and undefined behaviour on iOS. Wrap shared state in actors (Swift), use concurrent collections such as ConcurrentHashMap or CopyOnWriteArrayList, or lean on Kotlin coroutines with StateFlow.
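The failure is easy to demonstrate deterministically even on one thread, because ArrayList's modCount check fires on any structural change mid-iteration; a snapshot-iterating collection like CopyOnWriteArrayList survives the identical code path.

```java
import java.util.ConcurrentModificationException;
import java.util.List;

class IterationSafety {
    // Returns true if the pass completed, false if it crashed with a
    // ConcurrentModificationException.
    static boolean mutateWhileIterating(List<String> items) {
        try {
            for (String s : items) {
                if (s.equals("a")) items.remove(s);  // structural change mid-iteration
            }
            return true;                              // survived the full pass
        } catch (ConcurrentModificationException e) {
            return false;                             // the crash users would see
        }
    }
}
```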

8. WebView memory leaks

Embedded WebViews are a notorious source of leaks. Clear history, cookies, and cache on destroy, and destroy the WebView explicitly. Google's WebView best practices cover the lifecycle pitfalls.

9. SSL pinning failure after cert rotation

If you pin certificates and forget to rotate the pin before the backend cert expires, every user session fails, and many apps crash rather than fail gracefully. Pin to public key rather than leaf cert, and monitor expiration dashboards.

10. Push notification payload parse error

A malformed payload from marketing tooling lands in production, the payload parser throws, and the notification handler crashes. Wrap push handling in defensive try/catch and log unexpected payloads to your crash reporter as non-fatals.
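A sketch of that defensive wrapper, with an illustrative "deeplink:<target>" payload format and recordNonFatal standing in for a call to your crash reporter: a malformed or null payload is logged and swallowed, never rethrown.

```java
import java.util.ArrayList;
import java.util.List;

class PushHandler {
    static final List<String> nonFatals = new ArrayList<>();

    // Stand-in for Crashlytics/Sentry non-fatal logging.
    static void recordNonFatal(String msg) { nonFatals.add(msg); }

    // Returns the deep-link target, or null if the payload was unusable.
    static String handle(String payload) {
        try {
            if (!payload.startsWith("deeplink:")) throw new IllegalArgumentException(payload);
            return payload.substring("deeplink:".length());
        } catch (RuntimeException e) {        // also catches NPE on a null payload
            recordNonFatal("bad push payload: " + e);
            return null;
        }
    }
}
```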

11. Third-party SDK static initialiser crash

The SDK crashes during app startup, before your code even runs. These are invisible to most analytics. Isolate SDK init inside try/catch, feature-flag new SDKs, and stage rollouts through Firebase Remote Config or LaunchDarkly.
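The isolation pattern can be sketched in a few lines (the flag and the Runnable init are illustrative): with the flag off the SDK never runs, and with the flag on a crashing initialiser degrades to "SDK disabled" instead of killing startup.

```java
class SdkGate {
    // Returns true only if the SDK actually initialised.
    static boolean initSdk(boolean flagEnabled, Runnable sdkInit) {
        if (!flagEnabled) return false;   // staged rollout: flag off, skip entirely
        try {
            sdkInit.run();                // the vendor's init call
            return true;
        } catch (RuntimeException e) {
            // log to your crash reporter as a non-fatal, then continue startup
            return false;
        }
    }
}
```

Pair this with a remote flag so you can switch a misbehaving SDK off without shipping a new build.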

12. Date and timezone edge cases

Daylight saving transitions, leap years, and users who manually set their phone clock to 2036 will break date math. Use timezone-aware APIs, test against java.time on Android and Foundation's Calendar APIs on iOS, and validate inputs.
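A worked example of why "one day equals 24 hours" is exactly the kind of assumption that breaks: across the US spring-forward transition a calendar day is only 23 hours long, and java.time keeps the two notions distinct.

```java
import java.time.Duration;
import java.time.LocalDate;
import java.time.ZoneId;
import java.time.ZonedDateTime;

class DstMath {
    // Length of a calendar day in a given zone, in whole hours.
    static long hoursInDay(LocalDate date, ZoneId zone) {
        ZonedDateTime start = date.atStartOfDay(zone);
        ZonedDateTime end = date.plusDays(1).atStartOfDay(zone);
        return Duration.between(start, end).toHours();
    }
}
```

Any feature that adds fixed millisecond offsets to timestamps will drift by an hour twice a year.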

13. Keyboard and IME crashes on specific locales

Samsung keyboards in Korean, Chinese handwriting IMEs, and Gboard voice input have all produced reproducible crashes in apps that assume Latin input. Test on regional devices or use BrowserStack's regional device lab.

14. Accessibility service interaction

TalkBack and VoiceOver interact with view hierarchies in ways sighted testing never exercises. I've seen crashes triggered only when an accessibility service sends a focus event to a recycled view. Include accessibility testing in QA.

15. In-app purchase receipt validation

Receipt parsing, StoreKit 2 migrations, and Google Play Billing v6 have all shipped with quirks that crash apps mid-purchase. RevenueCat's engineering blog documents most of the sharp edges.

How to actually diagnose why your app keeps crashing

Here is the workflow I walk every new product team through.

1. Start with crash logs, but do not stop there

Crash logs from Firebase Crashlytics, Sentry, or Bugsnag give you stack traces, device models, OS versions, and frequency. That tells you where the crash happened in code.

What they do not tell you is what the user tapped, how long they hesitated, whether they rage-tapped, or which screen sequence led to the crash. That context is where 80% of the fix time goes.

2. Watch the session that crashed

This is where UXCam session replay changes the economics of crash investigation. Every session is recorded. When a crash fires, you can replay the exact taps, gestures, scroll behaviour, and screen transitions leading up to it. No guessing, no "can you reproduce this?" back-and-forth with support.

Daniel Lee, Senior PM at Virgin Mobile, put it bluntly: "UXCam highlighted issues I would have spent 20 hours to find."

3. Check for rage taps and UI freezes before the crash

Most crashes are preceded by user frustration signals. A button does not respond, so the user taps five times (a rage tap). A screen hangs for 3 seconds (a UI freeze). Then the app dies.

UXCam's issue analytics captures these automatically. Recora, the medical calculator app, reduced support tickets by 142% after spotting a press-and-hold gesture users were interpreting as a crash. The app was working; the UX was not. Only session replay surfaced that distinction.

Inspire Fitness used the same signals to cut rage taps by 56% and boost time-in-app by 460%.

4. Reproduce the crash with real session context

Once you have the replay, reproduce the path on the same device model and OS version. WeTransfer famously used Firebase Crashlytics breadcrumbs to isolate an iOS 11 iPhone-specific crash this way. Combine breadcrumbs with a video of the actual session and your repro time collapses.

5. Rule out memory leaks methodically

Profile memory usage across a 10-15 minute session on a low-RAM device. Look for steadily rising memory that never drops after navigation, spikes when loading image-heavy screens, bitmaps or video buffers not released on screen destroy, and retained closures or observer chains in Swift or Kotlin. If memory grows and never shrinks, you have a leak. Fix it before you ship another feature. LeakCanary on Android makes this nearly automatic in debug builds.

6. Audit third-party libraries

Pull your dependency manifest. For every SDK, check whether it is on the latest stable version, whether its latest release note mentions crash fixes, whether the vendor is still shipping updates, and whether Google Play Console or App Store Connect attributes crashes to its namespace. Kill anything unmaintained. Consolidate overlapping SDKs.

7. Optimise error handling and graceful degradation

Wrap network calls, JSON parsing, and database writes in proper error handling. Show the user a clear, friendly state instead of a dead screen. Log every caught exception to your analytics so you can monitor handled-but-ugly paths before they become crashes.
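A minimal sketch of graceful degradation under the assumptions above (the Supplier stands in for any network call, and the fallback value for a friendly offline state): a bounded number of retries, then a degraded-but-alive result instead of a crash.

```java
import java.util.function.Supplier;

class SafeFetch {
    static String fetchWithFallback(Supplier<String> fetch, int attempts, String fallback) {
        for (int i = 0; i < attempts; i++) {
            try {
                return fetch.get();
            } catch (RuntimeException e) {
                // log as a non-fatal to analytics here, then retry
            }
        }
        return fallback;   // degraded but alive: render an offline state
    }
}
```

Every fallback hit should be logged so you can watch handled-but-ugly paths trend before they become crashes.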

Industry-specific crash considerations

Not every vertical crashes for the same reasons. Here is what I watch for in the industries I work with most.

Fintech and banking

Regulatory logging, biometric auth flows, and 3D Secure redirects are the usual suspects. I have seen banking apps crash simply because a user received an SMS code, switched apps to read it, and came back to a deallocated session. Test app-switching flows aggressively, and make sure FIDO2 and biometric prompts handle cancellation gracefully. Audit logging SDKs often add startup latency that masks deeper crashes.

E-commerce and retail

Product detail pages with dozens of high-res images, AR try-on features, and third-party review widgets all inflate memory. Checkout is where most revenue-critical crashes happen, usually on a network hiccup during payment. Instrument the funnel with UXCam funnels so you can see exactly where the drop-off hits, and treat every crash between "add to cart" and "order confirmation" as P0.

Streaming and media

Video decoders, DRM libraries, and Chromecast or AirPlay integrations account for the majority of streaming app crashes. Background audio playback on iOS has its own lifecycle that trips up new teams. Follow Google's ExoPlayer guidance and iOS AVFoundation lifecycle carefully.

Gaming

Graphics memory, OpenGL and Metal context loss on backgrounding, and in-app purchase receipt crashes dominate. Games crash disproportionately on mid-range Android GPUs. Unity's crash diagnostics and Unreal's stack walker are table stakes.

Healthcare and regulated apps

HIPAA-adjacent apps often ship with heavy encryption on local storage, which can crash when the device is low on resources. Data sync conflicts with offline-first architectures are another hotspot. Recora's experience is a reminder that in healthcare, a misread gesture can look like a crash, and users will file support tickets either way.

Ride-hailing and logistics

Location services, background updates, and map SDK memory are the pain points. Apps that poll GPS aggressively without respecting Doze mode on Android will be killed by the OS and look like crashes to users. Batch location updates and respect battery optimisation settings.

Tools by category

When teams ask me what to install, I group tools by the job they do.

Crash reporting

Firebase Crashlytics is the default and free. Sentry is stronger if you want unified mobile and backend crash data. Bugsnag and Instabug both have strong stability scoring.

Session replay and product analytics

UXCam is the platform I work on and the one I recommend for teams that want session replay, heatmaps, issue analytics, and product analytics in one place across mobile and web.

Memory and performance profiling

Android Studio Profiler, Xcode Instruments, LeakCanary, and Flipper cover the local debugging story. Firebase Performance Monitoring handles production trace collection.

Dependency and security scanning

Snyk, Dependabot, and OWASP Dependency-Check catch outdated libraries and known CVEs before they ship.

Device testing

Firebase Test Lab, BrowserStack App Live, Sauce Labs, and AWS Device Farm give you access to real devices at scale.

Release management

Firebase App Distribution, TestFlight, and LaunchDarkly or Firebase Remote Config for feature flagging reduce blast radius when you ship.

10 common mistakes teams make when chasing crashes

1. Chasing the top stack trace only

The loudest crash is not always the most expensive. A crash affecting 50 users in your checkout flow is more urgent than one affecting 500 users on a marketing screen.

2. Ignoring non-fatal exceptions

Caught exceptions that show the user a broken screen are not "handled" in any meaningful way. Log them and treat them as pre-crashes.

3. Testing only on flagship devices

If your QA lab only has iPhone 15 Pros and Pixel 9s, you are systematically blind to the hardware most of your users run.

4. Skipping iOS and Android beta cycles

Every team that skips June-August betas pays for it in September when OS launches surface regressions in production.

5. Trusting crash-free session rate alone

Measure crash-free users as well. One user hitting three crashes is a retention disaster the session metric masks.

6. No rollback plan

If you cannot roll back a release in under an hour, you will ship through crashes rather than revert. Use staged rollouts on both stores.

7. Treating third-party crashes as "not our problem"

Users do not care whose SDK crashed. If it shipped in your binary, it is your crash.

8. Ignoring ANRs as separate from crashes

Android Play Console Vitals penalises ANRs alongside crashes. They hurt store ranking and retention equally.

9. Not capturing breadcrumbs

A stack trace without the last 20 user actions is a guessing game. Enable breadcrumbs in Crashlytics or Sentry and integrate with UXCam session replay for the visual layer.

10. Fixing symptoms, not root causes

Wrapping every crash in try/catch and calling it fixed just pushes the failure somewhere else. Find the root cause, fix it, then add defensive code.

A crash diagnostics maturity model

Teams do not go from chaos to clean in one sprint. Here is the progression I coach toward.

Level 1: Reactive

Crashes are discovered through user complaints or app store reviews. No crash reporter installed, or installed but not monitored. Stability is unknown. Goal: install Crashlytics or Sentry and start measuring.

Level 2: Instrumented

Crash reporter live, stack traces flowing, someone triages once a week. Crash-free session rate is a known number. Goal: reach 99% crash-free sessions and establish a response SLA.

Level 3: Contextual

Add UXCam session replay and issue analytics alongside crash reporting. Every crash ticket includes a replay link. Rage taps and UI freezes are triaged before they escalate to crashes. Goal: cut mean time to diagnose by at least 50%.

Level 4: Proactive

Pre-release stability gates on staged rollouts. OS beta testing from June onward. Memory and ANR budgets tracked per release. Tara AI clusters crash patterns and surfaces emerging issues before they hit the top of the dashboard. Goal: 99.9% crash-free users.

Level 5: Predictive

Machine learning flags risky code paths before release. Session replay plus funnels quantify the revenue cost of each crash cluster, and engineering prioritises fixes by dollar impact rather than volume. Housing.com's jump from 20% to 40% feature adoption came from this level of discipline applied to UX friction, and the same approach works for crashes.

Where UXCam fits in your crash workflow

UXCam is a product intelligence and product analytics platform installed in 37,000+ products across mobile apps and the web. For crash diagnostics, these are the pieces that matter most:

  • Session replay: watch the exact user session that ended in a crash

  • Heatmaps: see which UI elements users tap, miss, or rage-tap before a failure

  • Issue analytics: automatic detection of crashes, UI freezes, and rage taps with device and OS breakdowns

  • Tara AI: our AI analyst scans every session, clusters crash patterns, and recommends specific actions, so you do not have to manually sift through thousands of replays

  • Funnels and retention analytics: quantify how much revenue the crash is actually costing you

Housing.com used this stack to grow feature adoption from 20% to 40%. Costa Coffee raised registrations by 15% after clearing friction points surfaced in session replay. The same diagnostic muscle applies to crashes.

Frequently asked questions

Why do my apps keep crashing on my phone specifically?

If crashes are concentrated on your device, the cause is usually local: low storage (below 1 GB free), outdated OS, a corrupted app cache, or a background process eating RAM. Start by restarting the device, clearing the app's cache through system settings, updating the OS and the app to the latest versions, and freeing up storage. If only one app crashes, reinstall it. If multiple apps crash, the device or OS is the common factor. For Android, safe mode helps isolate whether a third-party app is the culprit. For iOS, check Settings > Privacy & Security > Analytics & Improvements > Analytics Data for crash logs.

What's the difference between a crash, a freeze, and an ANR?

A crash is when the app process terminates unexpectedly, usually because of an unhandled exception or memory violation. A UI freeze is when the app stops responding to input for a noticeable window (often 1-5 seconds) but eventually recovers. An ANR ("Application Not Responding") is Android's specific term for the main thread being blocked long enough that the OS offers the user a "wait or close" dialog, typically after 5 seconds. All three hurt retention, but they have different root causes. Crashes are usually code defects, freezes are usually main-thread work that should be async, and ANRs are a severe freeze the OS has decided to surface.

How do I find crash logs on my own device?

On iPhone, go to Settings > Privacy & Security > Analytics & Improvements > Analytics Data and look for entries starting with the app's name. On Android, enable Developer Options, then use adb logcat from a connected computer, or check Google Play Console if you own the app. For users reporting crashes, ask them to reproduce the issue and immediately check these logs, or install a crash reporter like Firebase Crashlytics or Sentry in your app so the logs are automatically collected and centralised for your team.

Can UXCam replace Firebase Crashlytics or Sentry?

They solve different problems, and most of the teams I work with use both. Crashlytics and Sentry give you the stack trace, breadcrumbs, and crash frequency. UXCam gives you the session replay, user context, rage taps, UI freezes, and the product-level impact. A stack trace tells engineering where to patch. A session replay tells product why the user hit that path in the first place, and whether the "crash" is actually a UX defect users are mistaking for one, as we saw with Recora's 142% support ticket reduction.

How many crashes per session is acceptable?

The mobile industry benchmark is a crash-free user rate above 99%, meaning fewer than 1 in 100 user sessions ends in a crash. Top-tier apps (Google, Meta, Netflix) run above 99.9%. If your crash-free rate is below 99%, you have a retention problem whether you see it yet or not, because users abandon apps after one crash. Track it by user, not by session, because a single user hitting three crashes in a row is three times as likely to churn as three different users hitting one each.
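The arithmetic behind that benchmark is simple enough to compute directly: crash-free user rate is the share of users who experienced zero crashes, expressed as a percentage.

```java
class CrashFreeRate {
    // Percentage of users who hit no crash at all in the period.
    static double crashFreeUserRate(int totalUsers, int usersWithCrash) {
        return 100.0 * (totalUsers - usersWithCrash) / totalUsers;
    }
}
```

At 10,000 users, 100 affected users already puts you exactly at the 99% floor; top-tier apps keep that count below 10.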

Why do apps crash more after an OS update?

OS updates routinely deprecate APIs, tighten permission models, change safe-area layouts, and modify background execution rules. If your app relies on an older behaviour, it will break. This is why every September (iOS release) and late-summer (Android release) brings a spike in crash reports across the ecosystem. The fix is proactive: run your app against OS betas from June onward, update dependent SDKs as vendors ship compatible versions, and have a release ready the week of the OS launch. Teams that skip beta testing always pay for it in September.

What is a realistic target for ANR rate on Android?

Google Play flags apps with a user-perceived ANR rate above 0.47% of daily users as exhibiting bad behaviour in Play Console Vitals. A healthy target is under 0.1%. If you are above the threshold, Play may reduce your app's visibility in search and recommendations, which compounds the retention hit.

Should I feature-flag every new SDK integration?

Yes, especially for anything that touches app startup, payments, or auth. A feature flag lets you disable a misbehaving SDK remotely without shipping a new build to the stores. Firebase Remote Config and LaunchDarkly both handle mobile well. The rollback speed has saved more than one release for teams I have worked with.

How do I know if a crash is caused by my code or a third-party SDK?

Look at the stack trace. If the frames sit inside your package namespace, it is likely your code. If they sit inside a vendor's namespace (for example, com.facebook.* or FBSDKCore), start there. Crashlytics and Sentry both let you filter by library. Be aware that native crashes can originate in a vendor SDK but only surface in your code path, so reproduce before blaming.

Do obfuscation and ProGuard affect crash diagnosis?

They help your app's binary size and security but make stack traces unreadable until you upload mapping files. Both Crashlytics and Sentry accept ProGuard and R8 mappings. Automate mapping upload in your CI pipeline so you never stare at obfuscated traces during an incident.

What about Flutter, React Native, and other cross-platform crashes?

Cross-platform frameworks introduce an extra layer of complexity because crashes can occur in native code, bridge code, or framework code. Sentry and Crashlytics both have first-class Flutter and React Native support. UXCam's React Native SDK and Flutter SDK capture session replay across the bridge so you see the full user journey regardless of where the crash originated.

How often should my team review crash dashboards?

Daily triage for any app with over 10,000 DAU. Weekly review of trends and top clusters for smaller apps. After every release, spend 48 hours watching the crash rate closely before promoting the rollout from 10% to 100%. Tara AI can do the heavy lifting by surfacing anomalies so your team is not manually scanning every morning.

How do I justify investment in crash tooling to leadership?

Tie crashes to revenue. Use funnels to measure the drop-off at the crashing step, multiply by average order value or LTV, and present the annualised cost. Teams that frame stability as a revenue protection initiative get budget faster than teams that frame it as engineering hygiene. Recora's 142% support ticket reduction and Inspire Fitness's 460% time-in-app lift are the kinds of numbers that win those conversations.

AUTHOR

Silvanus Alt, PhD

Co-Founder & CEO | UXCam

Silvanus Alt, PhD, is the Co-Founder & CEO of UXCam and an expert in AI-powered product intelligence. Trained at the Max Planck Institute for the Physics of Complex Systems, he built Tara, the AI Product Analyst that not only analyzes user behavior but recommends clear next steps for better products.

