The App That Almost Launched
She wanted a client management app for her physiotherapy practice: a way for patients to book sessions, fill out digital intake forms, track their home exercise programs, and message her directly between appointments. She discovered Replit Agent, spent two weekends building it, and felt genuinely proud of the result. It looked clean.
It had all the features she had planned. She submitted it to the App Store and waited. Apple rejected it under Guideline 2.1 (App Completeness) and Guideline 5.1.1 (Data Collection and Storage).
She made the changes she could interpret from Apple's feedback and resubmitted. Rejected again. At that point she had spent several hundred pounds in credits, six weekends, and weeks of evenings trying to solve a problem she lacked the technical context to fully understand.
She found us through a Google search, booked a free audit call, and sent us the codebase that same afternoon.
What Apple Actually Looks For in Review
Apple's App Review process has both automated and human stages. The automated stage scans for known security patterns: hardcoded credentials, insecure network calls, missing privacy usage descriptions, and API usage that does not match declared capabilities. The human stage checks that the app works as described, does not crash on a physical device, follows Apple's Human Interface Guidelines, and complies with App Store policies around data collection, privacy, and payment.
AI-generated code clears neither stage consistently, for a simple reason: the models are optimised to produce code that works, not code that is secure and compliant. They do not know App Store guidelines. They do not know your app will be scrutinised before a single user can download it.
That gap between 'works on my device' and 'approved by Apple' is exactly where most AI-built apps fail.
The Six Security Problems We Found
Here is what our audit found. First: the Replit-generated backend had API keys hardcoded directly into the JavaScript source, including the key for the booking system and the SMS notification service. Anyone who reverse-engineered the compiled app could have extracted and used those credentials.
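To illustrate the shape of the fix: a hardcoded key moves into an environment variable that is read once at startup. This is a minimal sketch, and BOOKING_API_KEY is a hypothetical variable name, not the app's actual configuration.

```javascript
// Before (unsafe): the key ships inside the bundled JavaScript source.
// const BOOKING_API_KEY = 'sk_live_abc123';

// After: read each secret from the environment at startup and fail fast
// if one is missing, so a misconfigured deploy cannot run half-working.
function requireEnv(name) {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Usage at startup (hypothetical variable name):
// const bookingApiKey = requireEnv('BOOKING_API_KEY');
```

Failing fast at startup is deliberate: a missing secret should stop the deploy, not surface later as a broken booking flow.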
Second: patient intake form data was stored in localStorage on the device without encryption. For a healthcare application, this is both a security risk and a compliance failure under UK GDPR. Third: the backend API endpoints had no authentication middleware; they could be called by anyone who knew the URL, without any credential check at all.
Fourth: several internal API routes were making HTTP calls rather than HTTPS, transmitting data over unencrypted connections. Fifth: session management used a simple, predictable token format vulnerable to hijacking. Sixth: there was no input sanitisation on the form fields, leaving the backend exposed to injection attacks.
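To make the sixth problem concrete: any free-text field that is later rendered into HTML needs escaping, or a patient "name" containing markup becomes stored XSS. This is a generic sketch of output escaping, not the app's actual code; injection into a database additionally requires parameterised queries.

```javascript
// Escape the five characters that are dangerous in HTML output.
// Rendering user-supplied text without this allows stored XSS.
function escapeHtml(input) {
  return String(input)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}
```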
None of these were intentional choices. The AI built something that worked. It did not build something that was safe.
Why AI Coding Tools Create These Problems
This is not a criticism of AI tools. Replit, Cursor, Bolt.new, and similar platforms are genuinely remarkable at what they do. The issue is what they optimise for.
When you ask an AI to build a booking form, it builds a booking form that works. It does not automatically audit for OWASP security standards. It does not check your compliance requirements.
It does not know that Apple will review your app before users can install it, or that healthcare data in the UK is subject to specific legal protections. The AI is responding to the brief you gave it, and if your brief did not include 'make this secure and App Store compliant,' neither will the output. The right mental model: AI-generated code is a prototype that needs professional review before it ships to production.
The prototype is valuable. Shipping it unreviewed is the expensive mistake.
The Four Apple Guideline Violations
Beyond the security problems, we found four direct App Store guideline violations. The app was missing usage description entries in Info.plist for three permissions it requested: NSCameraUsageDescription for progress photos, NSMicrophoneUsageDescription for voice notes, and NSLocationWhenInUseUsageDescription for finding nearby clinics. Apple requires a plain-English purpose string for every permission.
Without them, the app is rejected immediately. The UI had layouts that broke on iPhone SE and iPhone 15 Pro Max; Apple's reviewers test on a range of devices, and broken layouts on any supported size trigger a Guideline 4.0 rejection. The reviewer demo account, required so Apple's team can test the app without real patient credentials, was missing from the App Review Notes in App Store Connect.
And one section of the intake form collected health information without the disclosure language Apple requires for sensitive data collection. Each is individually fixable. Together, they explained both previous rejections completely.
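For reference, usage description entries look like this in Info.plist. The wording below is illustrative only; each string should explain, in plain English, why your app needs that specific permission.

```xml
<key>NSCameraUsageDescription</key>
<string>Used to photograph your exercise progress for your physiotherapist.</string>
<key>NSMicrophoneUsageDescription</key>
<string>Used to record voice notes attached to your sessions.</string>
<key>NSLocationWhenInUseUsageDescription</key>
<string>Used to find clinics near you.</string>
```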
How We Fixed It and Got Her Live in 14 Days
Our process: audit, prioritise, fix, resubmit. Days one and two were the full security and compliance audit, producing a prioritised fix list: security issues first, then guideline violations, then UI and metadata. We moved all API keys out of source code into a secure secrets manager.
We added authentication middleware to every backend endpoint. We migrated patient data from localStorage to encrypted server-side storage. We enforced HTTPS on all routes.
We rebuilt session management using JWT with appropriate expiry. We added input sanitisation across all form fields. Then the guideline work: all usage description strings added to Info.plist, responsive layouts fixed across all device sizes, a test account created and added to App Review Notes, and the health data disclosure language inserted in the correct section.
The resubmission was approved on the first attempt. She was live and taking bookings from patients fourteen days after her first call with us.
The Pre-Submission Checklist for AI-Built Apps
If you have built your app with AI tools and are approaching submission, here is the checklist we run on every audit. One: search your codebase for hardcoded strings that look like API keys, tokens, or secrets, and move every one to environment variables. Two: confirm that every backend endpoint requires authentication before returning or writing any data.
Three: verify all data in transit uses HTTPS, with no HTTP calls in production. Four: open Info.plist and confirm that every permission your app requests has a corresponding usage description entry (NSCameraUsageDescription, NSMicrophoneUsageDescription, and so on) in plain English. Five: test on the smallest and largest supported device sizes, iPhone SE and iPhone 15 Pro Max at minimum.
Six: create a working demo account and enter the credentials in the App Review Notes field in App Store Connect before submitting. Seven: if your app handles health, financial, or personal data, review Apple's data collection disclosure requirements and add the required language. Eight: read your rejection email carefully; Apple always references a specific guideline number, and that guideline page describes exactly what needs to change.
Nine: do a clean install test on a real physical device immediately before submitting. Ten: if you are not certain, get a review from someone who has been through App Review before you resubmit.
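Checklist item one can be partly automated. The sketch below flags source lines that look like hardcoded secrets; the patterns are illustrative, and a real audit would also run a dedicated scanner such as gitleaks or truffleHog.

```javascript
// Flag source lines that look like hardcoded secrets.
// Patterns are illustrative, not exhaustive.
const SECRET_PATTERNS = [
  /api[_-]?key\s*[:=]\s*['"][A-Za-z0-9_\-]{16,}['"]/i, // generic assignment
  /sk_live_[A-Za-z0-9]{16,}/,                          // Stripe-style live key
  /AKIA[0-9A-Z]{16}/,                                  // AWS access key ID
];

function findSecrets(sourceText) {
  const hits = [];
  sourceText.split('\n').forEach((line, i) => {
    if (SECRET_PATTERNS.some((p) => p.test(line))) {
      hits.push({ line: i + 1, text: line.trim() });
    }
  });
  return hits;
}
```

Run it across every file the AI generated, including config and build scripts, since keys tend to end up wherever the model found it convenient to put them.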
Built something with AI and not sure if it will pass review?
We offer a free first-call code audit. In 30 minutes we will tell you exactly what is likely to cause a rejection and what it would take to fix it, with no commitment required.
Book a Free App Audit

Frequently Asked Questions
Will Apple reject my app if it was built with AI?
Not automatically, but AI-generated code consistently produces patterns that trigger App Store rejection: missing privacy usage descriptions, insecure data handling, hardcoded credentials, and UI issues on certain device sizes. A code audit before submission significantly reduces rejection risk.
What are the most common App Store rejection reasons for AI-built apps?
The most common are: Guideline 2.1 (app crashes or is incomplete on review devices), Guideline 5.1.1 (missing privacy usage descriptions for permissions requested), and Guideline 4.0 (UI issues violating Human Interface Guidelines). Security issues can also trigger rejection under Guideline 5.0.
How do I find security vulnerabilities in AI-generated code?
Start by searching for hardcoded strings that look like API keys or credentials. Check every backend route for authentication requirements. Verify that all data transmission uses HTTPS. Review local storage usage to confirm no sensitive data is stored unencrypted on device. A professional code audit will catch issues that manual review misses.
How long does it take to fix a rejected app and resubmit?
A focused remediation of security and compliance issues typically takes one to two weeks. The resubmission review itself takes 24 to 48 hours. Total time from starting fixes to approval: two to three weeks in most cases, depending on the number and complexity of issues found.
Can I submit an app to the App Store without a developer?
You need an Apple Developer Program membership ($99/year) and access to Xcode on a Mac to archive and upload your build. If your app was generated by an AI platform, you will also need someone to review the code for security and compliance issues before submission โ the platform itself does not do this automatically.