
Deploying the Nginx Ingress Controller with DaemonSet + Taints/Tolerations + NodeSelector

WaltonWang
Published on 2017/08/25 21:19
  • Prepare N nodes in the Kubernetes cluster; we call these the proxy nodes. Only Nginx Ingress Controller (NIC) instances are deployed on these N nodes; no other workload containers run there.

  • Apply a NoSchedule taint to the proxy nodes to keep workload pods from being scheduled onto them. (A NoExecute taint would additionally evict pods already running on the node.)

    # Apply the NoSchedule taint to the proxy node
    $ kubectl taint nodes 192.168.56.105 LB=NIC:NoSchedule
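
To verify the taint (or to undo it later), standard kubectl commands can be used; the node name below is the example node from above:

```shell
# Show the taints currently set on the proxy node
kubectl describe node 192.168.56.105 | grep -i taints

# If needed, remove the taint again (note the trailing dash)
kubectl taint nodes 192.168.56.105 LB:NoSchedule-
```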
    
  • Label the proxy nodes so that the NIC is deployed only on nodes carrying the matching label.

    # Apply the matching label to the proxy node
    $ kubectl label nodes 192.168.56.105 LB=NIC
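
To confirm which nodes the NIC DaemonSet will be eligible for, list the nodes by that label:

```shell
# Only nodes carrying LB=NIC can receive NIC pods
kubectl get nodes -l LB=NIC
```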
    
  • Define the DaemonSet YAML file; be sure to include the tolerations and the nodeSelector.

    apiVersion: extensions/v1beta1
    kind: DaemonSet
    metadata:
      name: nginx-ingress-lb
      labels:
        name: nginx-ingress-lb
      namespace: kube-system
    spec:
      template:
        metadata:
          labels:
            name: nginx-ingress-lb
          annotations:
            prometheus.io/port: '10254'
            prometheus.io/scrape: 'true'
        spec:
          terminationGracePeriodSeconds: 60
          # nodeSelector matching the proxy-node label
          nodeSelector:
            LB: NIC
          # tolerations matching the proxy-node taint
          tolerations:
          - key: "LB"
            operator: "Equal"
            value: "NIC"
            effect: "NoSchedule"
          containers:
          - image: gcr.io/google_containers/nginx-ingress-controller:0.9.0-beta.11
            name: nginx-ingress-lb
            readinessProbe:
              httpGet:
                path: /healthz
                port: 10254
                scheme: HTTP
            livenessProbe:
              httpGet:
                path: /healthz
                port: 10254
                scheme: HTTP
              initialDelaySeconds: 20
              timeoutSeconds: 5
            ports:
            - containerPort: 80
              hostPort: 80
            - containerPort: 443
              hostPort: 443
            env:
              - name: POD_NAME
                valueFrom:
                  fieldRef:
                    fieldPath: metadata.name
              - name: POD_NAMESPACE
                valueFrom:
                  fieldRef:
                    fieldPath: metadata.namespace
            args:
            - /nginx-ingress-controller
            - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
            - --apiserver-host=http://192.168.56.119:8090  # This flag is important: without it the NIC tries to find the apiserver via Kubernetes service discovery, fails to connect, and keeps restarting.
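
The manifest above uses the extensions/v1beta1 API, which matched clusters of this era (2017). On Kubernetes 1.16 and later, DaemonSets moved to apps/v1, which additionally requires an explicit spec.selector; a sketch of the equivalent header:

```yaml
# apps/v1 DaemonSet header (required on Kubernetes 1.16+).
# spec.selector is mandatory and must match the pod template labels;
# the rest of the pod spec above carries over unchanged.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ingress-lb
  namespace: kube-system
spec:
  selector:
    matchLabels:
      name: nginx-ingress-lb
  template:
    metadata:
      labels:
        name: nginx-ingress-lb
    # ... pod spec as above ...
```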
    
  • Create the default backend service, which serves 404 responses.

    Prepare default-backend.yaml.

    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      name: default-http-backend
      labels:
        k8s-app: default-http-backend
      namespace: kube-system
    spec:
      replicas: 1
      template:
        metadata:
          labels:
            k8s-app: default-http-backend
        spec:
          terminationGracePeriodSeconds: 60
          containers:
          - name: default-http-backend
            # Any image is permissible as long as:
            # 1. It serves a 404 page at /
            # 2. It serves 200 on a /healthz endpoint
            image: gcr.io/google_containers/defaultbackend:1.0
            livenessProbe:
              httpGet:
                path: /healthz
                port: 8080
                scheme: HTTP
              initialDelaySeconds: 30
              timeoutSeconds: 5
            ports:
            - containerPort: 8080
            resources:
              limits:
                cpu: 10m
                memory: 20Mi
              requests:
                cpu: 10m
                memory: 20Mi
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: default-http-backend
      namespace: kube-system
      labels:
        k8s-app: default-http-backend
    spec:
      ports:
      - port: 80
        targetPort: 8080
      selector:
        k8s-app: default-http-backend
    
    

    Create the Deployment and Service from default-backend.yaml.

    $ kubectl create -f examples/deployment/nginx/default-backend.yaml
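
Before moving on, it may help to confirm the backend is up (labels as defined in the manifest above):

```shell
# Deployment, Service and pod of the default backend
kubectl get deployment,svc -n kube-system -l k8s-app=default-http-backend
kubectl get pods -n kube-system -l k8s-app=default-http-backend -o wide
```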
    
  • Create the NIC DaemonSet from the DaemonSet YAML to start the NIC.

    $ kubectl apply -f nginx-ingress-daemonset.yaml 
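
A quick way to confirm the rollout, assuming the labels from the manifest above:

```shell
# One NIC pod should be Running on every labeled proxy node
kubectl get ds -n kube-system nginx-ingress-lb
kubectl get pods -n kube-system -o wide -l name=nginx-ingress-lb

# If the pods keep restarting, the controller log usually shows an
# apiserver connection error (see the --apiserver-host note above)
kubectl logs -n kube-system -l name=nginx-ingress-lb --tail=20
```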
    
  • Once the NIC has started successfully, create services for testing.

    $ kubectl run echoheaders --image=gcr.io/google_containers/echoserver:1.8 --replicas=1 --port=8080
    
    $ kubectl expose deployment echoheaders --port=80 --target-port=8080 --name=echoheaders-x
    
    $ kubectl expose deployment echoheaders --port=80 --target-port=8080 --name=echoheaders-y
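
Both Services should resolve to the same echoheaders pod; a quick check:

```shell
# Services and their endpoints for the test deployment
kubectl get svc echoheaders-x echoheaders-y
kubectl get endpoints echoheaders-x echoheaders-y
```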
    
  • Create an Ingress object for testing.

    Define the following file: ingress.yaml.

    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: echomap
      namespace: default
    spec:
      rules:
      - host: foo.bar.com
        http:
          paths:
          - backend:
              serviceName: echoheaders-x
              servicePort: 80
            path: /foo
      - host: bar.baz.com
        http:
          paths:
          - backend:
              serviceName: echoheaders-y
              servicePort: 80
            path: /bar
          - backend:
              serviceName: echoheaders-x
              servicePort: 80
            path: /foo
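
On newer clusters the extensions/v1beta1 Ingress API is gone (removed in Kubernetes 1.22); the foo.bar.com rule above would look roughly like this under networking.k8s.io/v1 (note the now-mandatory pathType):

```yaml
# networking.k8s.io/v1 form of the foo.bar.com rule (Kubernetes 1.19+)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: echomap
  namespace: default
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /foo
        pathType: Prefix   # pathType is required in v1
        backend:
          service:
            name: echoheaders-x
            port:
              number: 80
```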
    

    Create the Ingress from ingress.yaml.

    $ kubectl apply -f ingress.yaml
    
  • Check the Ingress proxy address.

    [root@master01 nginx]# kubectl describe ing echomap
    Name:			echomap
    Namespace:		default
    Address:		192.168.56.105 # proxy address
    Default backend:	default-http-backend:80 (172.17.0.6:8080)
    Rules:
      Host		Path	Backends
      ----		----	--------
      foo.bar.com	
        		/foo 	echoheaders-x:80 (<none>)
      bar.baz.com	
        		/bar 	echoheaders-y:80 (<none>)
        		/foo 	echoheaders-x:80 (<none>)
    Annotations:
    Events:
      FirstSeen	LastSeen	Count	From			SubObjectPath	Type		Reason	Message
      ---------	--------	-----	----			-------------	--------	------	-------
      51m		51m		1	ingress-controller			Normal		CREATE	Ingress default/echomap
      50m		50m		1	ingress-controller			Normal		UPDATE	Ingress default/echomap
    
    
  • Test

    [root@master03 k8s.io]# curl 192.168.56.105/foo -H 'Host: foo.bar.com'
    
    
    Hostname: echoheaders-301308589-9l53v
    
    Pod Information:
    	-no pod information available-
    
    Server values:
    	server_version=nginx: 1.13.3 - lua: 10008
    
    Request Information:
    	client_address=172.17.0.8
    	method=GET
    	real path=/foo
    	query=
    	request_version=1.1
    	request_uri=http://foo.bar.com:8080/foo
    
    Request Headers:
    	accept=*/*
    	connection=close
    	host=foo.bar.com
    	user-agent=curl/7.29.0
    	x-forwarded-for=192.168.56.103
    	x-forwarded-host=foo.bar.com
    	x-forwarded-port=80
    	x-forwarded-proto=http
    	x-original-uri=/foo
    	x-real-ip=192.168.56.103
    	x-scheme=http
    
    Request Body:
    	-no body in request-
    
    [root@master03 k8s.io]# 
    [root@master03 k8s.io]# curl 192.168.56.105/bar -H 'Host: bar.baz.com'
    
    
    Hostname: echoheaders-301308589-9l53v
    
    Pod Information:
    	-no pod information available-
    
    Server values:
    	server_version=nginx: 1.13.3 - lua: 10008
    
    Request Information:
    	client_address=172.17.0.8
    	method=GET
    	real path=/bar
    	query=
    	request_version=1.1
    	request_uri=http://bar.baz.com:8080/bar
    
    Request Headers:
    	accept=*/*
    	connection=close
    	host=bar.baz.com
    	user-agent=curl/7.29.0
    	x-forwarded-for=192.168.56.103
    	x-forwarded-host=bar.baz.com
    	x-forwarded-port=80
    	x-forwarded-proto=http
    	x-original-uri=/bar
    	x-real-ip=192.168.56.103
    	x-scheme=http
    
    Request Body:
    	-no body in request-
    
    
  • Appendix: inspect the generated nginx.conf

    [root@node01 ~]# docker exec -ti 773c5595b0b8 /bin/bash
    root@nginx-ingress-lb-fbgv7:/etc/nginx# cat /etc/nginx/nginx.conf
    daemon off;
    
    worker_processes 2;
    pid /run/nginx.pid;
    
    worker_rlimit_nofile 523264;
    events {
        multi_accept        on;
        worker_connections  16384;
        use                 epoll;
    }
    
    http {
        set_real_ip_from    0.0.0.0/0;
        real_ip_header      X-Forwarded-For;
    
        real_ip_recursive   on;
    
        geoip_country       /etc/nginx/GeoIP.dat;
        geoip_city          /etc/nginx/GeoLiteCity.dat;
        geoip_proxy_recursive on;
        # lua section to return proper error codes when custom pages are used
        lua_package_path '.?.lua;/etc/nginx/lua/?.lua;/etc/nginx/lua/vendor/lua-resty-http/lib/?.lua;';
        init_by_lua_block {
            require("error_page")
        }
    
        sendfile            on;
        aio                 threads;
        tcp_nopush          on;
        tcp_nodelay         on;
    
        log_subrequest      on;
    
        reset_timedout_connection on;
    
        keepalive_timeout  75s;
        keepalive_requests 100;
    
        client_header_buffer_size       1k;
        large_client_header_buffers     4 8k;
        client_body_buffer_size         8k;
    
        http2_max_field_size            4k;
        http2_max_header_size           16k;
    
        types_hash_max_size             2048;
        server_names_hash_max_size      1024;
        server_names_hash_bucket_size   32;
        map_hash_bucket_size            64;
    
        proxy_headers_hash_max_size     512;
        proxy_headers_hash_bucket_size  64;
    
        variables_hash_bucket_size      64;
        variables_hash_max_size         2048;
    
        underscores_in_headers          off;
        ignore_invalid_headers          on;
    
        include /etc/nginx/mime.types;
        default_type text/html;
        gzip on;
        gzip_comp_level 5;
        gzip_http_version 1.1;
        gzip_min_length 256;
        gzip_types application/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/plain text/x-component;
        gzip_proxied any;
    
        # Custom headers for response
    
        server_tokens on;
    
        # disable warnings
        uninitialized_variable_warn off;
    
        log_format upstreaminfo '$the_real_ip - [$the_real_ip] - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent" $request_length $request_time [$proxy_upstream_name] $upstream_addr $upstream_response_length $upstream_response_time $upstream_status';
    
        map $request_uri $loggable {
            default 1;
        }
    
        access_log /var/log/nginx/access.log upstreaminfo if=$loggable;
        error_log  /var/log/nginx/error.log notice;
    
        resolver 192.168.56.201 valid=30s;
    
        # Retain the default nginx handling of requests without a "Connection" header
        map $http_upgrade $connection_upgrade {
            default          upgrade;
            ''               close;
        }
    
        # trust http_x_forwarded_proto headers correctly indicate ssl offloading
        map $http_x_forwarded_proto $pass_access_scheme {
            default          $http_x_forwarded_proto;
            ''               $scheme;
        }
    
        map $http_x_forwarded_port $pass_server_port {
           default           $http_x_forwarded_port;
           ''                $server_port;
        }
    
        map $http_x_forwarded_for $the_real_ip {
            default          $http_x_forwarded_for;
            ''               $remote_addr;
        }
    
        # map port 442 to 443 for header X-Forwarded-Port
        map $pass_server_port $pass_port {
            442              443;
            default          $pass_server_port;
        }
    
        # Map a response error watching the header Content-Type
        map $http_accept $httpAccept {
            default          html;
            application/json json;
            application/xml  xml;
            text/plain       text;
        }
    
        map $httpAccept $httpReturnType {
            default          text/html;
            json             application/json;
            xml              application/xml;
            text             text/plain;
        }
    
        # Obtain best http host
        map $http_host $this_host {
            default          $http_host;
            ''               $host;
        }
    
        map $http_x_forwarded_host $best_http_host {
            default          $http_x_forwarded_host;
            ''               $this_host;
        }
    
        server_name_in_redirect off;
        port_in_redirect        off;
    
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    
        # turn on session caching to drastically improve performance
        ssl_session_cache builtin:1000 shared:SSL:10m;
        ssl_session_timeout 10m;
    
        # allow configuring ssl session tickets
        ssl_session_tickets on;
    
        # slightly reduce the time-to-first-byte
        ssl_buffer_size 4k;
    
        # allow configuring custom ssl ciphers
        ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';
        ssl_prefer_server_ciphers on;
    
        ssl_ecdh_curve secp384r1;
    
        proxy_ssl_session_reuse on;
    
        upstream upstream-default-backend {
            # Load balance algorithm; empty for round robin, which is the default
            least_conn;
            server 172.17.0.6:8080 max_fails=0 fail_timeout=0;
        }
    
        upstream default-echoheaders-x-80 {
            # Load balance algorithm; empty for round robin, which is the default
            least_conn;
            server 172.17.0.14:8080 max_fails=0 fail_timeout=0;
        }
    
        upstream default-echoheaders-y-80 {
            # Load balance algorithm; empty for round robin, which is the default
            least_conn;
            server 172.17.0.14:8080 max_fails=0 fail_timeout=0;
        }
    
        server {
            server_name _;
            listen 80 default_server reuseport backlog=511;
            listen [::]:80 default_server reuseport backlog=511;
            set $proxy_upstream_name "-";
    
            listen 442 proxy_protocol default_server reuseport backlog=511 ssl http2;
            listen [::]:442 proxy_protocol  default_server reuseport backlog=511 ssl http2;
            # PEM sha: 3b2b1e879257b99da971c8e21d428383941a7984
            ssl_certificate                         /ingress-controller/ssl/default-fake-certificate.pem;
            ssl_certificate_key                     /ingress-controller/ssl/default-fake-certificate.pem;
    
            more_set_headers                        "Strict-Transport-Security: max-age=15724800; includeSubDomains;";
            location / {
                set $proxy_upstream_name "upstream-default-backend";
    
                port_in_redirect off;
    
                client_max_body_size                    "1m";
    
                proxy_set_header Host                   $best_http_host;
    
                # Pass the extracted client certificate to the backend
    
                # Allow websocket connections
                proxy_set_header                        Upgrade           $http_upgrade;
                proxy_set_header                        Connection        $connection_upgrade;
    
                proxy_set_header X-Real-IP              $the_real_ip;
                proxy_set_header X-Forwarded-For        $the_real_ip;
                proxy_set_header X-Forwarded-Host       $best_http_host;
                proxy_set_header X-Forwarded-Port       $pass_port;
                proxy_set_header X-Forwarded-Proto      $pass_access_scheme;
                proxy_set_header X-Original-URI         $request_uri;
                proxy_set_header X-Scheme               $pass_access_scheme;
    
                # mitigate HTTPoxy Vulnerability
                # https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
                proxy_set_header Proxy                  "";
    
                # Custom headers to proxied server
    
                proxy_connect_timeout                   5s;
                proxy_send_timeout                      60s;
                proxy_read_timeout                      60s;
    
                proxy_redirect                          off;
                proxy_buffering                         off;
                proxy_buffer_size                       "4k";
                proxy_buffers                           4 "4k";
    
                proxy_http_version                      1.1;
    
                proxy_cookie_domain                     off;
                proxy_cookie_path                       off;
    
                # In case of errors try the next upstream server before returning an error
                proxy_next_upstream                     error timeout invalid_header http_502 http_503 http_504;
    
                proxy_pass http://upstream-default-backend;
            }
    
            # health checks in cloud providers require the use of port 80
            location /healthz {
                access_log off;
                return 200;
            }
    
            # this is required to avoid error if nginx is being monitored
            # with an external software (like sysdig)
            location /nginx_status {
                allow 127.0.0.1;
                allow ::1;
                deny all;
    
                access_log off;
                stub_status on;
            }
        }
    
        server {
            server_name bar.baz.com;
            listen 80;
            listen [::]:80;
            set $proxy_upstream_name "-";
            location /foo {
                set $proxy_upstream_name "default-echoheaders-x-80";
    
                port_in_redirect off;
    
                client_max_body_size                    "1m";
    
                proxy_set_header Host                   $best_http_host;
    
                # Pass the extracted client certificate to the backend
    
                # Allow websocket connections
                proxy_set_header                        Upgrade           $http_upgrade;
                proxy_set_header                        Connection        $connection_upgrade;
    
                proxy_set_header X-Real-IP              $the_real_ip;
                proxy_set_header X-Forwarded-For        $the_real_ip;
                proxy_set_header X-Forwarded-Host       $best_http_host;
                proxy_set_header X-Forwarded-Port       $pass_port;
                proxy_set_header X-Forwarded-Proto      $pass_access_scheme;
                proxy_set_header X-Original-URI         $request_uri;
                proxy_set_header X-Scheme               $pass_access_scheme;
    
                # mitigate HTTPoxy Vulnerability
                # https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
                proxy_set_header Proxy                  "";
    
                # Custom headers to proxied server
    
                proxy_connect_timeout                   5s;
                proxy_send_timeout                      60s;
                proxy_read_timeout                      60s;
    
                proxy_redirect                          off;
                proxy_buffering                         off;
                proxy_buffer_size                       "4k";
                proxy_buffers                           4 "4k";
    
                proxy_http_version                      1.1;
    
                proxy_cookie_domain                     off;
                proxy_cookie_path                       off;
    
                # In case of errors try the next upstream server before returning an error
                proxy_next_upstream                     error timeout invalid_header http_502 http_503 http_504;
    
                proxy_pass http://default-echoheaders-x-80;
            }
            location /bar {
                set $proxy_upstream_name "default-echoheaders-y-80";
    
                port_in_redirect off;
    
                client_max_body_size                    "1m";
    
                proxy_set_header Host                   $best_http_host;
    
                # Pass the extracted client certificate to the backend
    
                # Allow websocket connections
                proxy_set_header                        Upgrade           $http_upgrade;
                proxy_set_header                        Connection        $connection_upgrade;
    
                proxy_set_header X-Real-IP              $the_real_ip;
                proxy_set_header X-Forwarded-For        $the_real_ip;
                proxy_set_header X-Forwarded-Host       $best_http_host;
                proxy_set_header X-Forwarded-Port       $pass_port;
                proxy_set_header X-Forwarded-Proto      $pass_access_scheme;
                proxy_set_header X-Original-URI         $request_uri;
                proxy_set_header X-Scheme               $pass_access_scheme;
    
                # mitigate HTTPoxy Vulnerability
                # https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
                proxy_set_header Proxy                  "";
    
                # Custom headers to proxied server
    
                proxy_connect_timeout                   5s;
                proxy_send_timeout                      60s;
                proxy_read_timeout                      60s;
    
                proxy_redirect                          off;
                proxy_buffering                         off;
                proxy_buffer_size                       "4k";
                proxy_buffers                           4 "4k";
    
                proxy_http_version                      1.1;
    
                proxy_cookie_domain                     off;
                proxy_cookie_path                       off;
    
                # In case of errors try the next upstream server before returning an error
                proxy_next_upstream                     error timeout invalid_header http_502 http_503 http_504;
    
                proxy_pass http://default-echoheaders-y-80;
            }
            location / {
                set $proxy_upstream_name "upstream-default-backend";
    
                port_in_redirect off;
    
                client_max_body_size                    "1m";
    
                proxy_set_header Host                   $best_http_host;
    
                # Pass the extracted client certificate to the backend
    
                # Allow websocket connections
                proxy_set_header                        Upgrade           $http_upgrade;
                proxy_set_header                        Connection        $connection_upgrade;
    
                proxy_set_header X-Real-IP              $the_real_ip;
                proxy_set_header X-Forwarded-For        $the_real_ip;
                proxy_set_header X-Forwarded-Host       $best_http_host;
                proxy_set_header X-Forwarded-Port       $pass_port;
                proxy_set_header X-Forwarded-Proto      $pass_access_scheme;
                proxy_set_header X-Original-URI         $request_uri;
                proxy_set_header X-Scheme               $pass_access_scheme;
    
                # mitigate HTTPoxy Vulnerability
                # https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
                proxy_set_header Proxy                  "";
    
                # Custom headers to proxied server
    
                proxy_connect_timeout                   5s;
                proxy_send_timeout                      60s;
                proxy_read_timeout                      60s;
    
                proxy_redirect                          off;
                proxy_buffering                         off;
                proxy_buffer_size                       "4k";
                proxy_buffers                           4 "4k";
    
                proxy_http_version                      1.1;
    
                proxy_cookie_domain                     off;
                proxy_cookie_path                       off;
    
                # In case of errors try the next upstream server before returning an error
                proxy_next_upstream                     error timeout invalid_header http_502 http_503 http_504;
    
                proxy_pass http://upstream-default-backend;
            }
    
        }
    
        server {
            server_name foo.bar.com;
            listen 80;
            listen [::]:80;
            set $proxy_upstream_name "-";
            location /foo {
                set $proxy_upstream_name "default-echoheaders-x-80";
    
                port_in_redirect off;
    
                client_max_body_size                    "1m";
    
                proxy_set_header Host                   $best_http_host;
    
                # Pass the extracted client certificate to the backend
    
                # Allow websocket connections
                proxy_set_header                        Upgrade           $http_upgrade;
                proxy_set_header                        Connection        $connection_upgrade;
    
                proxy_set_header X-Real-IP              $the_real_ip;
                proxy_set_header X-Forwarded-For        $the_real_ip;
                proxy_set_header X-Forwarded-Host       $best_http_host;
                proxy_set_header X-Forwarded-Port       $pass_port;
                proxy_set_header X-Forwarded-Proto      $pass_access_scheme;
                proxy_set_header X-Original-URI         $request_uri;
                proxy_set_header X-Scheme               $pass_access_scheme;
    
                # mitigate HTTPoxy Vulnerability
                # https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
                proxy_set_header Proxy                  "";
    
                # Custom headers to proxied server
    
                proxy_connect_timeout                   5s;
                proxy_send_timeout                      60s;
                proxy_read_timeout                      60s;
    
                proxy_redirect                          off;
                proxy_buffering                         off;
                proxy_buffer_size                       "4k";
                proxy_buffers                           4 "4k";
    
                proxy_http_version                      1.1;
    
                proxy_cookie_domain                     off;
                proxy_cookie_path                       off;
    
                # In case of errors try the next upstream server before returning an error
                proxy_next_upstream                     error timeout invalid_header http_502 http_503 http_504;
    
                proxy_pass http://default-echoheaders-x-80;
            }
            location / {
                set $proxy_upstream_name "upstream-default-backend";
    
                port_in_redirect off;
    
                client_max_body_size                    "1m";
    
                proxy_set_header Host                   $best_http_host;
    
                # Pass the extracted client certificate to the backend
    
                # Allow websocket connections
                proxy_set_header                        Upgrade           $http_upgrade;
                proxy_set_header                        Connection        $connection_upgrade;
    
                proxy_set_header X-Real-IP              $the_real_ip;
                proxy_set_header X-Forwarded-For        $the_real_ip;
                proxy_set_header X-Forwarded-Host       $best_http_host;
                proxy_set_header X-Forwarded-Port       $pass_port;
                proxy_set_header X-Forwarded-Proto      $pass_access_scheme;
                proxy_set_header X-Original-URI         $request_uri;
                proxy_set_header X-Scheme               $pass_access_scheme;
    
                # mitigate HTTPoxy Vulnerability
                # https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
                proxy_set_header Proxy                  "";
    
                # Custom headers to proxied server
    
                proxy_connect_timeout                   5s;
                proxy_send_timeout                      60s;
                proxy_read_timeout                      60s;
    
                proxy_redirect                          off;
                proxy_buffering                         off;
                proxy_buffer_size                       "4k";
                proxy_buffers                           4 "4k";
    
                proxy_http_version                      1.1;
    
                proxy_cookie_domain                     off;
                proxy_cookie_path                       off;
    
                # In case of errors try the next upstream server before returning an error
                proxy_next_upstream                     error timeout invalid_header http_502 http_503 http_504;
    
                proxy_pass http://upstream-default-backend;
            }
    
        }
        # default server, used for NGINX healthcheck and access to nginx stats
        server {
            # Use the port 18080 (random value just to avoid known ports) as default port for nginx.
            # Changing this value requires a change in:
            # https://github.com/kubernetes/contrib/blob/master/ingress/controllers/nginx/nginx/command.go#L104
            listen 18080 default_server reuseport backlog=511;
            listen [::]:18080 default_server reuseport backlog=511;
            set $proxy_upstream_name "-";
    
            location /healthz {
                access_log off;
                return 200;
            }
    
            location /nginx_status {
                set $proxy_upstream_name "internal";
    
                access_log off;
                stub_status on;
            }
    
            # this location is used to extract nginx metrics
            # using prometheus.
            # TODO: enable extraction for vts module.
            location /internal_nginx_status {
                set $proxy_upstream_name "internal";
    
                allow 127.0.0.1;
                allow ::1;
                deny all;
    
                access_log off;
                stub_status on;
            }
    
            location / {
                set $proxy_upstream_name "upstream-default-backend";
                proxy_pass             http://upstream-default-backend;
            }
    
        }
    
        # default server for services without endpoints
        server {
            listen 8181;
            set $proxy_upstream_name "-";
    
            location / {
                return 503;
            }
        }
    }
    
    stream {
        log_format log_stream [$time_local] $protocol $status $bytes_sent $bytes_received $session_time;
    
        access_log /var/log/nginx/access.log log_stream;
    
        error_log  /var/log/nginx/error.log;
    
        # TCP services
    
        # UDP services
    }
    

© All rights reserved by the author


Comments (3)

开源中国首席打酱油啊哎滴:
Isn't calico installed through k8s as well? If a NoExecute taint is applied, calico also gets evicted from that node, which disconnects the node from the cluster.

开源中国首席打酱油啊哎滴:
Solved it by adding the following to the calico-node manifest:
tolerations:
- key: "key"
  operator: "Exists"
  effect: "NoSchedule"

WaltonWang (in reply):
Where there is a taint, there must be a toleration.