We're trying to configure a test agent to run in our Kubernetes cluster, but we can't get it working. I dug into the issue and found the following: when the image starts up, I'm met with this rather obscure error:
Using Agent Alias: My First k8s Agent
Marking Agent as temporary...
Failed obtaining agent configuration!
Error code:
Request ID: N/A
Error message: N/A
Here is some debug info from the container itself while it's running:
/opt/testproject/agent $ cat /etc/os-release
NAME="Alpine Linux"
ID=alpine
VERSION_ID=3.15.0
PRETTY_NAME="Alpine Linux v3.15"
HOME_URL="https://alpinelinux.org/"
BUG_REPORT_URL="https://bugs.alpinelinux.org/"
/opt/testproject/agent $ curl -v -s -i -X POST "https://api.testproject.io/v2/agents/config" -H "accept: application/json" -H "Authorization: <redacted>"
* Could not resolve host: api.testproject.io
* Closing connection 0
It's not starting due to a DNS error: Could not resolve host: api.testproject.io
I see the agent image "testproject/agent:latest" is based on Alpine 3.15, which has a known issue with DNS not resolving correctly inside Kubernetes.
See:
I have confirmed that the same API call works from a Debian-based image inside the same cluster.
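For anyone else hitting this, here's the minimal resolver check I ended up using to test images quickly, since not every image ships curl. This is just my own sketch (the function name and default host are mine, not anything from the agent):

```python
import socket
import sys

def can_resolve(hostname: str) -> bool:
    """Return True if the container's resolver can look up hostname."""
    try:
        socket.getaddrinfo(hostname, 443)
        return True
    except socket.gaierror:
        return False

if __name__ == "__main__":
    host = sys.argv[1] if len(sys.argv) > 1 else "api.testproject.io"
    print(f"{host}: {'resolvable' if can_resolve(host) else 'NOT resolvable'}")
```

Running it inside the alpine:3.15-based agent container fails for me, while the same script in a Debian-based pod resolves fine.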
Could we possibly get a Debian-based agent image so we can get this running in a Kubernetes cluster?
Hello,
I can't confirm the same.
Tested on Kubernetes v1.22.5 from Docker Desktop.
Tested on Kubernetes v1.21 on AWS EKS.
It seems the described issue is more related to kind.
I can't replicate it running locally in minikube either.
However, I can replicate it reliably using alpine:3.15 running in our GKE clusters (k8s 1.22.6-gke.300).
Funnily enough, older versions of Alpine do work. I have tested connectivity from an alpine:3.1 image and it works fine.
Anyway, Alpine images have a long history of being problematic in Kubernetes; it's even called out in the Google docs themselves when troubleshooting DNS problems.
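In the meantime, one mitigation that sometimes helps with musl resolver issues is tuning the pod's dnsConfig so external names skip search-domain expansion. A sketch of what that would look like (pod and container names are illustrative, and this may not fix every case; a Debian-based image is still the more robust fix):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: testproject-agent   # illustrative name
spec:
  containers:
    - name: agent
      image: testproject/agent:latest
  dnsConfig:
    options:
      - name: ndots
        value: "1"   # resolve external names like api.testproject.io directly,
                     # instead of first trying each cluster search domain
```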