Exposing a Kubernetes app using an AWS Elastic Load Balancer

0 votes

I have a Kubernetes cluster with the following service created with type LoadBalancer -
(Source reference: https://github.com/kenzanlabs/kubernetes-ci-cd/blob/master/applications/hello-kenzan/k8s/manual-deployment.yaml)

    apiVersion: v1
    kind: Service
    metadata:
      name: hello-kenzan
      labels:
        app: hello-kenzan
    spec:
      ports:
      - port: 80
        targetPort: 80
      selector:
        app: hello-kenzan
        tier: hello-kenzan
      type: LoadBalancer

    ---
    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      name: hello-kenzan
      labels:
        app: hello-kenzan
    spec:
      strategy:
        type: Recreate
      template:
        metadata:
          labels:
            app: hello-kenzan
            tier: hello-kenzan
        spec:
          containers:
          - image: gopikrish81/hello-kenzan:latest
            name: hello-kenzan
            ports:
            - containerPort: 80
              name: hello-kenzan

After I created the service with -

    kubectl apply -f k8s/manual-deployment.yaml
    kubectl get svc

It is showing EXTERNAL-IP as `<pending>`.
Since I have created a LoadBalancer type service, why isn't it creating an IP?

FYI, I can access the app using `curl <master node>:<nodeport>`,
and I can also access it through proxy forwarding.
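
As a first debugging step (a hedged sketch, not something from the original post), the Service events and the controller-manager logs usually say why the load balancer is not being provisioned; this assumes a kubeadm cluster where the controller-manager pod carries the `component=kube-controller-manager` label:

    kubectl describe svc hello-kenzan        # check the Events section for load-balancer errors
    kubectl -n kube-system logs -l component=kube-controller-manager --tail=50

If no event mentions a cloud provider at all, the cluster most likely has no cloud provider configured, which matches the rest of this thread.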

**UPDATE as of 29/1**

I followed the steps from the answer in this post: https://stackoverflow.com/questions/50668070/kube-controller-manager-dont-start-when-using-cloud-provider-aws-with-kubeadm

1) I modified the file "/etc/systemd/system/kubelet.service.d/10-kubeadm.conf" by adding the below line under [Service] -

    Environment="KUBELET_EXTRA_ARGS=--cloud-provider=aws --cloud-config=/etc/kubernetes/cloud-config.conf"

And I created this cloud-config.conf as below -

    [Global]
    KubernetesClusterTag=kubernetes
    KubernetesClusterID=kubernetes

I am not sure what this Tag and ID refer to, but when I run the below command I can see the output mentioning clusterName as "kubernetes" -

    kubeadm config view

Then I executed -

    systemctl daemon-reload
    systemctl restart kubelet
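
As a sanity check (my own hedged sketch, not part of the original steps), you can confirm the kubelet actually picked up the drop-in after the restart:

    systemctl cat kubelet                                    # should show the Environment= line from the drop-in
    ps aux | grep kubelet | grep -- --cloud-provider=aws     # running process should carry the flag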

2) Then, as mentioned in that post, I added `--cloud-provider=aws` in both kube-controller-manager.yaml and kube-apiserver.yaml

3) I also added the below annotation in the manual-deployment.yaml of my application (see the sketch below for where it belongs in the Service) -

    annotations:
      service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
https://github.com/kenzanlabs/kubernetes-ci-cd/blob/master/applications/hello-kenzan/k8s/manual-deployment.yaml
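
Note that this annotation only has an effect on the Service object (it does nothing on the Deployment). A hedged sketch of where it would sit in the manifest above:

    apiVersion: v1
    kind: Service
    metadata:
      name: hello-kenzan
      annotations:
        service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
      labels:
        app: hello-kenzan
    spec:
      type: LoadBalancer
      ports:
      - port: 80
        targetPort: 80
      selector:
        app: hello-kenzan
        tier: hello-kenzan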

Now, when I deployed using `kubectl apply -f k8s/manual-deployment.yaml`, the pod itself was not getting created when I checked with `kubectl get po --all-namespaces`.

So I reverted step 2 above and deployed again, and now the pod was getting created successfully. But it is still showing `<pending>` for EXTERNAL-IP when I do `kubectl get svc`.

I even renamed my master and worker nodes to match the EC2 instance Private DNS names (ip-10-118-6-35.ec2.internal and ip-10-118-11-225.ec2.internal), as mentioned in the post below (under the section "Proper Node Names"), and reconfigured the cluster, but still no luck.
https://medium.com/jane-ai-engineering-blog/kubernetes-on-aws-6281e3a830fe

Also, my EC2 instances have an IAM role attached, and when I look at the details for that role, I can see there are 8 policies applied to it. In one of the policies I can see the below (there are many other Actions which I am not posting here) -

    {
        "Action": "elasticloadbalancing:*",
        "Resource": "*",
        "Effect": "Allow"
    }
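
For comparison, the in-tree AWS cloud provider needs the control-plane role to be able to describe EC2 resources and manage load balancers. The statement below is only an illustrative, hedged sketch of the kind of permissions involved, not the exact policy from the referenced posts:

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "ec2:Describe*",
                    "ec2:CreateSecurityGroup",
                    "ec2:AuthorizeSecurityGroupIngress",
                    "elasticloadbalancing:*"
                ],
                "Resource": "*"
            }
        ]
    }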

I am clueless as to what other settings I am missing. Please suggest!

**UPDATE as of 30/1**

I did the below additional steps as mentioned in this blog - https://blog.scottlowe.org/2018/09/28/setting-up-the-kubernetes-aws-cloud-provider/

1) Added the AWS tag "kubernetes.io/cluster/kubernetes" to all of my EC2 instances (master and worker nodes) and also to my security group (see the AWS CLI sketch after this list)

2) I haven't added apiServerExtraArgs, controllerManagerExtraArgs and nodeRegistration manually in the configuration file. What I did instead was reset the cluster entirely using `sudo kubeadm reset -f` and then add this to the kubeadm conf file on both master and worker nodes -

    Environment="KUBELET_EXTRA_ARGS=--cloud-provider=aws --cloud-config=/etc/kubernetes/cloud-config.conf"

cloud-config.conf -

    [Global]
    KubernetesClusterTag=kubernetes.io/cluster/kubernetes
    KubernetesClusterID=kubernetes

Then I executed on both master and worker nodes -

    systemctl daemon-reload
    systemctl restart kubelet

3) Then I created the cluster using the below command on the master node -

    sudo kubeadm init --pod-network-cidr=192.168.1.0/16 --apiserver-advertise-address=10.118.6.35

4) Then I was able to join the worker node to the cluster successfully and deployed the flannel CNI.

After this, `kubectl get nodes` showed Ready status.
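
For reference on step 1, a hedged sketch of applying that tag from the AWS CLI; the instance ID is the one mentioned later in this thread, the security-group ID is illustrative, and the Value of "owned" follows the common convention:

    # tag the instances and the security group so the cloud provider can discover them
    aws ec2 create-tags \
        --resources i-02dbf9b3a7d9163e7 sg-0123456789abcdef0 \
        --tags Key=kubernetes.io/cluster/kubernetes,Value=owned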

One important point to note is that there are kube-apiserver.yaml and kube-controller-manager.yaml files in the /etc/kubernetes/manifests path.

When I added `--cloud-provider=aws` in both of these YAML files, my deployments were not happening and pods were not getting created at all. When I removed the flag `--cloud-provider=aws` from kube-apiserver.yaml, deployments and pods succeeded.

When I modified the YAML for kube-apiserver and kube-controller-manager, both of those static pods got recreated successfully. But since my application pods were not getting created, I removed the flag from kube-apiserver.yaml alone.
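
For reference, the flag goes into the `command` list of the static pod manifest. A hedged fragment of what /etc/kubernetes/manifests/kube-controller-manager.yaml would look like with it (all other flags left unchanged, image tag illustrative):

    spec:
      containers:
      - name: kube-controller-manager
        image: k8s.gcr.io/kube-controller-manager:v1.13.2
        command:
        - kube-controller-manager
        - --cloud-provider=aws
        # ...existing flags unchanged; if --cloud-config is also passed,
        # the file must be mounted into the container as well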

Also, I checked the logs with `kubectl logs kube-controller-manager-ip-10-118-6-35.ec2.internal -n kube-system`

But I don't see any exceptions or abnormalities. I can see this in the last part -

    I0130 19:14:17.444485    1 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-kenzan", UID:"c........", APIVersion:"apps/v1", ResourceVersion:"16212", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-kenzan-56686879-ghrhj

I even tried to add the below annotation to manual-deployment.yaml, but it still shows the same `<pending>` -

    service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0

asked Jan 31, 2019 in Kubernetes by Gopi

Actually, both my load balancer and my EC2 instances are in the same VPC.

From my local machine I am now able to access this URL: https://internal-myservices-987070943.us-east-1.elb.amazonaws.com
What I did was: 1) the health check was failing on HTTPS port 443, and 2) I installed the nginx web server on my EC2 instance.
Installing nginx and opening the SSL port resolved the health check issue, and I am able to browse the internal LB URL over HTTPS.

But my original problem of getting a load balancer created through the Kubernetes Service is still not resolved :frowning:
It still shows pending. My doubt is: since both the EC2 instance and the LB are in the same VPC, why does `traceroute internal-myservices-987070943.us-east-1.elb.amazonaws.com` show nothing? I am getting * * * for all 30 hops, whereas from my local machine I can trace it successfully. Could this be the reason why no external IP is being created?

1 answer to this question.

0 votes

Hey @Gopi, 

Try creating a load balancer pod (maybe nginx) and then create a service using this load balancer pod.

Also, about this:

"When I added `--cloud-provider=aws` in both of these yaml files, my deployments were not happening and pods were not getting created at all. So when I removed the flag `--cloud-provider=aws` from kube-apiserver.yaml, deployments and pods succeeded."

The documentation clearly mentions:

kube-apiserver and kube-controller-manager MUST NOT specify the --cloud-provider flag. This ensures that it does not run any cloud specific loops that would be run by cloud controller manager. In the future, this flag will be deprecated and removed.

kubelet must run with --cloud-provider=external. This is to ensure that the kubelet is aware that it must be initialized by the cloud controller manager before it is scheduled any work.
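
In other words, with the external cloud-controller-manager approach only the kubelet carries a cloud-provider flag. A hedged sketch of the corresponding kubelet drop-in (the same file edited earlier in the question), assuming that approach is used:

    # /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, under [Service]
    Environment="KUBELET_EXTRA_ARGS=--cloud-provider=external"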

answered Jan 31, 2019 by Kalgi
OK, to make it simple: I created an AWS Application Load Balancer named myservices and got the following DNS name listed in the AWS console - internal-myservices-987070943.us-east-1.elb.amazonaws.com

I also have a Target Group created, showing the below under Description -
Name: myservices-LB, Protocol: HTTPS, Port: 443, Target type: instance, Load Balancer: myservices
Under the Targets tab I can see registered targets showing my instance ID i-02dbf9b3a7d9163e7 with port 443 and other details. This is the EC2 instance which I have configured as the master node of my Kubernetes cluster.

Now when I try to access the LB DNS name directly with the URL internal-myservices-987070943.us-east-1.elb.amazonaws.com/api/v1/namespaces/default/services,
I am getting "This site can't be reached".

Whereas if I proxy forward from my master node instance using `kubectl proxy --address 0.0.0.0 --accept-hosts '.*'`
and then access my master node IP directly as below, I am able to browse -
10.118.6.35:8001/api/v1/namespaces/default/services

Isn't it possible to access Kubernetes services deployed as NodePort or LoadBalancer type directly through an AWS load balancer DNS name?
I even tested the connectivity using `tracert internal-myservices-987070943.us-east-1.elb.amazonaws.com`
and I can successfully reach the destination 10.118.12.196 in 18 hops.

But from my EC2 master node instance it is not tracing. Normally I have a proxy set with this command - `export {http,https,ftp}_proxy=http://proxy.ebiz.myorg.com:80` - and with it I can access even external URLs.
Could this be an issue?
Also, I wonder how it is that nginx installed on my EC2 instance is able to access my load balancer, but traceroute is not able to reach it.

Is it possible to directly access my service using the load balancer which I manually created via the AWS console? Maybe with NodePort or Ingress or something?
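
One pattern that fits a manually created load balancer (a hedged sketch, not a confirmed fix for this cluster): expose the Service on its NodePort and register the instances in the target group on that port, rather than on 443 where nothing from the app is listening:

    kubectl get svc hello-kenzan -o jsonpath='{.spec.ports[0].nodePort}'
    # prints something like 31234 (illustrative); then point the target group at that
    # port on the instances (protocol HTTP) and align the health check with the app path
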
Update:

These are the only AWS-related logs I can see in the controller-manager logs -

    aws.go:1041] Building AWS cloud-provider
    aws.go:1007] Zone not specified in configuration file; querying AWS metadata service

Also, I don't see this policy in my IAM role: `"Action": "s3:*", "Resource": [ "arn:aws:s3:::kubernetes-*"` ... Can this be an issue?

Now, after a certain amount of time, I see the below log start occurring -

    controllermanager.go:208] error building controller context: cloud provider could not be initialized: could not init cloud provider "aws": error finding instance i-02dbf9b3a7d9163e7: "error listing AWS instances: "RequestError: send request failed\ncaused by: Post ec2.us-east-1.amazonaws.com: dial tcp 54.239.28.168:443 i/o timeout""
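
The i/o timeout suggests the controller-manager cannot reach the EC2 API endpoint directly. If the node only has outbound access through the corporate proxy mentioned above, one hedged option (a sketch, not a confirmed fix; the NO_PROXY entries are assumptions based on the node IPs in this thread) is to pass the proxy to the controller-manager through its static pod manifest:

    # fragment of /etc/kubernetes/manifests/kube-controller-manager.yaml (illustrative)
    spec:
      containers:
      - name: kube-controller-manager
        env:
        - name: HTTPS_PROXY
          value: http://proxy.ebiz.myorg.com:80
        - name: NO_PROXY
          value: 10.118.0.0/16,169.254.169.254,.ec2.internal
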
This is absolutely right: if you are using an external CCM provider, as with k3s, you must set --cloud-provider to external, since the provider will not link to your master nodes. Spin up a worker node and connect it to your cluster. During that connection, in k3s, you can specify the ProviderID. This must be done for every instance you wish to be added as a target!

It looks like this in k3s:

    --kubelet-arg="provider-id=aws:///$(curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone)/$(curl -s http://169.254.169.254/latest/meta-data/instance-id)"

You can also do this manually by editing the node and setting spec.providerID.
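
For the manual route, a hedged sketch using `kubectl patch` (the node name and instance ID are the ones from this thread; the availability zone is an assumption):

    kubectl patch node ip-10-118-6-35.ec2.internal \
        -p '{"spec":{"providerID":"aws:///us-east-1a/i-02dbf9b3a7d9163e7"}}'

Note that providerID can only be set once on a node; if it is already set to something else, the node has to be removed and re-registered.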
