Troubleshooting
This page covers common issues with AI Code Insights, how to interpret the data, and how to inspect what the daemon is doing on a developer’s machine.
Installation issues
Apple notarization warning on macOS
If macOS shows a warning that the installer is from an unidentified developer, the .pkg file may not have passed Gatekeeper validation. Verify the installer was downloaded from Admin → AI Code Insights and is signed. Right-click the .pkg file and select Open to bypass the warning for a verified installer.
Missing permissions on macOS
If repositories are located in protected directories (Desktop, Downloads, Documents), Full Disk Access must be granted to /usr/local/bin/aicodemetricsd. See the Installation guide for details.
EDR or endpoint security conflicts
The daemon’s behavior — a background process that watches file system changes, hooks into developer tools, stores data locally, and makes outbound HTTPS calls — can trigger behavioral alerts in EDR solutions. Pre-configure exclusions for daemon processes and paths before deployment. See Endpoint considerations for the full list of processes and paths to exclude.
MDM profile not applied
If the daemon starts but cannot authenticate, verify that the Configuration Profile with API credentials has been applied to the machine. On macOS, check managed preferences with:
defaults read com.getdx.aicodemetrics
If the output is empty or missing api_url and api_key, the profile has not been applied. See the MDM installation guide for Configuration Profile setup.
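Both keys can be checked in one pass. A minimal sketch, assuming the key names shown above (`api_url`, `api_key`); the exact plist output format may vary:

```shell
# Check that the managed preferences contain both required keys.
# Reads the output of `defaults read com.getdx.aicodemetrics` on stdin;
# key names (api_url, api_key) are from the MDM installation guide.
check_prefs() {
  awk '/api_url/ { u = 1 } /api_key/ { k = 1 }
       END {
         if (u && k) print "profile applied"
         else print "profile missing keys:" (u ? "" : " api_url") (k ? "" : " api_key")
       }'
}

defaults read com.getdx.aicodemetrics | check_prefs
```

If either key is reported missing, re-push the Configuration Profile before restarting the daemon.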
Installer logs
If the daemon does not start after installation, check the installer logs:
| Platform | Installer log path |
|---|---|
| macOS | /tmp/aicodemetrics-install.log |
| Linux | /tmp/aicodemetrics-install.log |
| Windows | Run msiexec /i <installer>.msi /l*v install.log to capture a log |
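On macOS and Linux, a quick way to surface failures in those logs is to grep for error-looking lines. The pattern below is a rough heuristic, not an exhaustive list of what the installer may emit:

```shell
# Print any error-looking lines from an installer log, with line numbers.
# The error|fail pattern is a heuristic; adjust it as needed.
scan_log() {
  grep -inE 'error|fail' "$1" || echo "no error lines found in $1"
}

scan_log /tmp/aicodemetrics-install.log
```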
Installed but no data in DX
Hooks not firing
For agents with native integrations, the daemon installs hooks or plugins that capture AI-generated code at the moment it is accepted. If data from a natively supported agent is missing:
- Verify hooks are installed by checking the agent’s configuration file for AI Code Insights hook entries.
- Restart the daemon with `aicodemetricsd restart`; this triggers a re-scan for installed agents.
- Check the daemon logs for hook installation errors (see How to inspect locally).
Hook timeouts prevent a slow or unresponsive hook from blocking the agent. If hooks are timing out, the daemon logs a warning. This does not affect the developer’s workflow — the agent continues working, but session data for that interaction may be incomplete.
Repositories not detected
The daemon only monitors repositories whose Git remote origin matches a repository in your organization’s Data Cloud. If a repository is not being monitored:
- Verify the remote: Run `git remote -v` in the repository. The `origin` URL must match a repository imported into Data Cloud.
- Check for URL mismatches: If developers clone from an internal mirror or SSH alias, the URL may not match the canonical URL in Data Cloud. Use `url_rewrites` to map internal URLs to their canonical equivalents.
- Wait for the next poll: The daemon fetches the repository list on startup and hourly thereafter. A newly imported repository in Data Cloud may take up to an hour to appear.
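URL mismatches are often just an SSH-versus-HTTPS difference in form. A sketch of the kind of normalization involved, for comparing a clone URL against the canonical one by hand (illustrative only; the daemon's actual `url_rewrites` matching rules may differ):

```shell
# Reduce scp-style SSH and HTTPS remote URLs to host/org/repo so they can
# be compared against the canonical Data Cloud URL.
normalize_remote() {
  printf '%s\n' "$1" | sed \
    -e 's#^git@\([^:]*\):#\1/#' \
    -e 's#^https\{0,1\}://##' \
    -e 's#\.git$##'
}

normalize_remote "git@github.com:acme/payments.git"
normalize_remote "https://github.com/acme/payments"
```

Both example invocations reduce to `github.com/acme/payments`; if the two forms in your environment do not reduce to the same value, a rewrite rule is needed.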
Remote development and devcontainers
The daemon must run on the same machine where the code lives. For remote SSH sessions or devcontainer-based workflows, install the daemon on the remote host — not the developer’s local machine. If the remote environment is ephemeral (destroyed after each session), the daemon will need to be reinstalled each time the environment is created, which may make it impractical for some setups.
Network or proxy issues
The daemon communicates with a single DX API host over HTTPS. If the daemon cannot reach the API:
- Verify outbound HTTPS access to your organization’s DX API URL (found under Admin → AI Code Insights).
- Check for proxy or firewall rules that may block the connection.
- Review the daemon logs for `Failed to fetch repositories from API` or `API authentication failed` errors.
Data does not match expectations
Code attribution accuracy depends on the integration method. For agents with native integrations, attribution is based on direct signals from the agent and is highly accurate. For behavioral analysis, the following edge cases apply.
Blank-line attribution
Blank lines inserted as part of an AI-generated code block are attributed to AI. Blank lines added by the developer (spacing, formatting) are attributed to human. The daemon uses context — surrounding code origin and insertion timing — to make this determination. In ambiguous cases, isolated blank lines default to human attribution.
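As a toy illustration of that rule (not the daemon's implementation), consider a stream of lines tagged with their known origin, where blank lines arrive untagged:

```shell
# Toy blank-line attribution: a blank line is tagged "ai" only when both
# neighbours are AI-attributed; otherwise it falls back to "human".
# Input: one "origin<TAB>text" pair per line; blank lines use origin "?".
attribute_blanks() {
  awk -F'\t' '
    { origin[NR] = $1; text[NR] = $2 }
    END {
      for (i = 1; i <= NR; i++) {
        o = origin[i]
        if (o == "?")
          o = (origin[i-1] == "ai" && origin[i+1] == "ai") ? "ai" : "human"
        print o "\t" text[i]
      }
    }'
}

printf 'ai\tfoo()\n?\t\nai\tbar()\n' | attribute_blanks
```

Here the blank line sits between two AI-attributed lines, so it inherits the `ai` tag; a blank line with a human-attributed neighbour would default to `human`.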
AI deletions
Distinguishing between human-initiated and AI-initiated deletions is challenging with behavioral signals alone. The daemon attributes bulk deletions across multiple files within a short time frame to AI activity. All other deletions are attributed to the developer. For agents with native integrations, deletions are attributed based on direct signals from the agent.
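A toy version of that bulk-deletion rule, with invented thresholds (three or more distinct files inside a 5-second window; the daemon's actual values are not documented here):

```shell
# Tag deletion events: "ai" when >= 3 distinct files are touched within the
# same 5-second window, otherwise "human". Thresholds are illustrative only.
# Input: one "epoch_seconds<TAB>file" deletion event per line.
classify_deletions() {
  awk -F'\t' '
    {
      bucket = int($1 / 5)                  # 5-second windows
      key = bucket SUBSEP $2
      if (!(key in seen)) { seen[key] = 1; files[bucket]++ }
      line[NR] = $0; b[NR] = bucket
    }
    END {
      for (i = 1; i <= NR; i++)
        print (files[b[i]] >= 3 ? "ai" : "human") "\t" line[i]
    }'
}

printf '100\tsrc/a.py\n101\tsrc/b.py\n102\tsrc/c.py\n200\tsrc/a.py\n' | classify_deletions
```

The first three events fall in one window across three files and are tagged `ai`; the isolated deletion at `200` is tagged `human`.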
Mixed human and AI edits in the same file
When a developer manually edits AI-generated code before committing, the daemon tracks both the original AI insertion and the developer’s modifications. The similarity score in per-file metrics indicates how much the committed code differs from the original AI output. A score of 1.0 means the code is unchanged; lower scores indicate more developer modification.
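A rough stand-in for that score, since the daemon's exact similarity metric is not specified here: the fraction of lines from the original AI output that survive verbatim in the committed file.

```shell
# similarity ORIGINAL COMMITTED -> prints a 0.00-1.00 score: the share of
# lines from the original AI output still present verbatim in the commit.
# An illustrative proxy, not the daemon's real metric.
similarity() {
  sort "$1" > "${TMPDIR:-/tmp}/_ai_sorted"
  sort "$2" > "${TMPDIR:-/tmp}/_commit_sorted"
  common=$(comm -12 "${TMPDIR:-/tmp}/_ai_sorted" "${TMPDIR:-/tmp}/_commit_sorted" | wc -l)
  total=$(wc -l < "$1")
  awk -v c="$common" -v t="$total" 'BEGIN { printf "%.2f\n", (t ? c / t : 0) }'
}
```

An unmodified file scores 1.00; replacing one line out of three drops the score to 0.67.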
Suggested-but-ignored completions
For agents with native hooks, only accepted code is attributed to AI. Suggestions that the developer dismisses or ignores are not counted. For behavioral analysis, the daemon tracks what actually appears in the file — suggested code that is never written to disk is not observed.
Browser-based AI
Code generated in a browser-based agent (ChatGPT, Claude.ai) and pasted into an editor is attributed to the developer. Clipboard data does not include source information, so pasted code — whether from AI, documentation, or Stack Overflow — is treated as human-written.
PRs from before the daemon was installed
AI Code Insights can only classify code changes that occur while the daemon is running. Commits pushed before installation have no AI attribution data. Reports that span a period before and after installation will show 0% AI-generated code for the pre-installation window.
Performance
Resource footprint
The daemon typically uses ~100 MB of memory and minimal CPU. It is event-driven via Watchman rather than polling, so CPU usage is negligible under normal conditions. Memory footprint scales with the number of monitored repositories.
Watch manager limits
The watch manager enforces a limit on the number of simultaneously monitored repositories. When the limit is exceeded, the least recently active repositories are paused. Monitoring resumes automatically when a paused repository becomes active. If developers report missing data for some repositories, the watch manager limit may need adjustment. See Configuration for details.
Log noise
Daemon logs rotate automatically at 25 MB, keeping the 3 most recent files for up to 90 days. If logs are filling up faster than expected, check whether log_level is set to debug in the config file — this is useful for troubleshooting but generates significantly more output. Set it back to info for normal operation.
How to inspect locally
Daemon status
aicodemetricsd status
Shows whether the daemon is running, which repositories are being monitored, and Watchman status.
Log locations
| Platform | Log path |
|---|---|
| macOS | ~/Library/Logs/AI Code Metrics/aicodemetrics.log |
| Windows | $env:LOCALAPPDATA\aicodemetrics\aicodemetrics.log |
| Linux | ~/.aicodemetrics/aicodemetrics.log |
To enable verbose logging:
AI_CODE_METRICS_LOG_LEVEL=debug aicodemetricsd
Or set log_level to debug in the config file.
Querying local databases
The daemon stores data in SQLite databases in ~/.aicodemetrics (macOS/Linux) or $env:LOCALAPPDATA\aicodemetrics (Windows). Open them with any SQLite client to inspect what the daemon has recorded.
- repositories.db: Which repositories are being monitored and their metadata.
- content.db: Classification data, edit events, commit metadata, and AI attribution metrics.
This data remains on the developer’s machine. Only aggregate metrics are transmitted to DX.
Other useful commands
aicodemetricsd --version # Print version information
aicodemetricsd restart # Restart the daemon
aicodemetricsd uninstall-hooks # Remove hooks from agent configs
Filing bugs
When reporting an issue to DX support or your account team, include:
- Daemon version (`aicodemetricsd --version`)
- Operating system and version
- AI coding agents in use
- Relevant log output (set `log_level` to `debug` first to capture detailed logs)
- Steps to reproduce the issue