FAQs
Below are some commonly encountered issues and the steps to troubleshoot them quickly:
The spinner keeps on spinning when the first analysis is performed
It is highly likely that the node selector labels haven't been set, so the analysis pods haven't been scheduled onto any nodes. To investigate this, run `kubectl describe pod <pod-name> -n atlas-jobs`, where `<pod-name>` can be fetched by running `kubectl get pods -n atlas-jobs`. The describe command returns detailed information about the pod; the last section of its output lists the pod's events and looks something like this:
```
Type     Reason    Age                   From              Message
----     ------    ----                  ----              -------
Normal   Created   20m (x352 over 7d7h)  kubelet, gke-...  Created container ...
Normal   Started   20m (x352 over 7d7h)  kubelet, gke-...  Started container ...
Normal   Pulled    20m (x351 over 7d7h)  kubelet, gke-...  Container image "..." already present on machine
```
If the labels are missing, the events will say that the pod could not be scheduled because no nodes matched its node selector. To fix this, run `kubectl label nodes <node-name> A=B`, where `A=B` is the label key-value pair that you entered in the admin console during installation.
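The snippet below sketches the full sequence described above; `<pod-name>`, `<node-name>`, and the `A=B` label are placeholders for your own values.

```shell
# Find an analysis pod that is stuck in Pending
kubectl get pods -n atlas-jobs

# Inspect its events; a node selector mismatch typically shows up
# as a FailedScheduling event near the bottom of the output
kubectl describe pod <pod-name> -n atlas-jobs

# Apply the label you entered in the admin console to the node(s)
kubectl label nodes <node-name> A=B
```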
Repositories are not visible when attempting to sync
This can be attributed to a misconfiguration in the admin console. To fix this, generate a new GitHub app private key, upload it to the admin console, and then redeploy.
Unable to view logs for some of the pods
This usually indicates a firewall issue between the nodes (or VMs): the CNI (Container Networking Interface) requires pods to be reachable from one another over the overlay network across all nodes. When a firewall blocks this traffic, pods on one node (or the Kubernetes master node) cannot communicate with pods on another node. To resolve this, please refer to this resource.
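A quick way to confirm this is to check whether a pod on one node can reach a pod on another node over the overlay network. This is only a sketch; the pod names and IP below are placeholders for values from your own cluster.

```shell
# Note the pod IPs and the nodes each pod is scheduled on
kubectl get pods -n <namespace-in-which-deepsource-is-installed> -o wide

# From a pod on one node, try to reach the IP of a pod on another node.
# <source-pod> and <target-pod-ip> are placeholders; if ping is not
# available in the container image, curl/wget against a listening port
# works just as well.
kubectl exec -n <namespace-in-which-deepsource-is-installed> <source-pod> -- ping -c 3 <target-pod-ip>
```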
All the pods are pending for more than a minute when running `kubectl get pods`
You will need to add the node selector labels that you entered in the admin console to the nodes. Please refer to this resource.
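To confirm the labels were applied, the following sketch lists the nodes and filters them by the label; `A=B` stands for the key-value pair you entered in the admin console.

```shell
# List all nodes together with their labels
kubectl get nodes --show-labels

# Only nodes carrying the label (A=B is your own key=value pair) should appear here
kubectl get nodes -l A=B
```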
Support bundle generation has been stuck at X% for the last few hours
This usually happens if the previous support bundle generation had crashed. To fix this:
- Run `kubectl get secrets -n <namespace-in-which-deepsource-is-installed> | grep supportbundle` in the namespace that DeepSource is installed in.
- Delete the secrets starting with `supportbundle-*` one by one, using `kubectl delete secret <name-of-the-secret> -n <namespace-in-which-deepsource-is-installed>` (a loop that does this is sketched below the list). The last support bundle's state is stored in one of these secrets, so deleting them lets the Admin Console know that it has to start over.
- Once done, go to the Admin Console and click on "Generate a support bundle" again.
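If there are several stale secrets, a small loop saves some typing. This is only a sketch; replace the namespace placeholder with the namespace DeepSource is installed in.

```shell
# Namespace placeholder: replace with the namespace DeepSource is installed in
NS=<namespace-in-which-deepsource-is-installed>

# List the stale support bundle secrets
kubectl get secrets -n "$NS" | grep supportbundle

# Delete each supportbundle-* secret so the Admin Console starts over
for secret in $(kubectl get secrets -n "$NS" -o name | grep supportbundle); do
  kubectl delete -n "$NS" "$secret"
done
```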
If support bundles keep getting stuck even after deleting the secrets, it is most likely because the Admin Console cannot process the support bundle due to resource constraints. In these cases, you can generate a support bundle from your terminal using this command.
Logging in for the first time shows Social Login Failure
Please make sure all the configuration values in the GitHub/GitLab app are entered correctly. An incorrect value usually results in this error.
Can I use a top level load balancer for routing requests to the deployment?
Yes, you can.
Are backups supported?
Yes, backups are supported as Velero snapshots. Refer to this resource to create snapshots.
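As an illustration, a minimal Velero CLI invocation for creating and checking a snapshot; the backup name and namespace below are placeholders, and your Velero setup may use different options.

```shell
# Create a backup of the namespace DeepSource is installed in
# (both names are placeholders)
velero backup create deepsource-backup \
  --include-namespaces <namespace-in-which-deepsource-is-installed>

# Check the backup's progress and status
velero backup describe deepsource-backup
```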
I have used self-signed certificates for the deployment; do I need to change any other configuration?
Yes, you will need to disable TLS verification in your GitHub app. Note that GitLab does not support TLS verification.
Patching with Kustomize
You can patch DeepSource Enterprise Server with Kustomize. For example, you can customize the number of replicas you want to run, or specify which nodeSelectors a deployment should use. Please refer to this resource.
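As a sketch, here is a kustomization.yaml that overrides the replica count and nodeSelector of one deployment; the resource file, deployment name, and label below are placeholders rather than the actual names shipped with DeepSource.

```yaml
# kustomization.yaml — a sketch; all names and values are placeholders
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - deepsource-manifests.yaml    # the rendered DeepSource manifests

patches:
  - target:
      kind: Deployment
      name: <deployment-name>    # placeholder for the deployment to patch
    patch: |-
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: <deployment-name>
      spec:
        replicas: 3
        template:
          spec:
            nodeSelector:
              A: B               # the label key/value you applied to your nodes
```

Running `kubectl apply -k .` in the directory containing this file applies the patched manifests.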
I still have a question
Reach out to Enterprise Support at enterprise-support@deepsource.io with any questions, suggestions, or bug reports.