I. Gateway
1. Nginx (Traffic Gateway)
- Definition: Nginx is a high-performance HTTP and reverse proxy server. Besides load balancing, caching, and serving static files, Nginx is commonly used as an ingress gateway. In a microservice architecture, it can act as a gateway layer that receives external requests and routes traffic to backend services.
- Application scenarios: Primarily used for reverse proxying, load balancing, and traffic management. In a Kubernetes environment, Nginx acts as an Ingress controller, managing how external traffic enters the cluster.
- Relationship with the other two: Nginx can act as a traffic gateway at the infrastructure layer, handling traffic between clients and the microservice architecture. It is usually not involved in business logic or API management, focusing instead on traffic routing and load balancing.
2. Gateway (Business Gateway)
- Definition: A business gateway (also known as a service gateway) is a gateway layer for microservices, primarily responsible for traffic forwarding, routing, rate limiting, and authentication between microservices. It typically also provides business-oriented functionality such as call-chain tracing, authentication and authorization, and request aggregation.
- Application scenarios: Business gateways are common in microservice architectures. As an intermediary layer between services, they handle cross-service requests and responses and can provide shared functions for each microservice, such as authentication, logging, and traffic control.
- Relationship with the other two: Business gateways reside at the application layer and directly handle business-related logic, while Nginx typically only handles traffic routing. API gateways can be considered a specific type of business gateway, focused on handling API requests.
3. API Gateway
- Definition: An API gateway is a special type of business gateway that focuses on API routing and management. It typically serves as the unified entry point for all API requests, responsible for request routing, load balancing, authentication, rate limiting, request aggregation, and more.
- Application scenarios: API gateways are typically used in microservice architectures as a centralized entry point that simplifies communication between clients and multiple services. By exposing a unified API and aggregating backend service interfaces, they simplify interaction between the frontend and backend.
- Relationship with the other two: API gateways are often used together with Nginx (the traffic gateway): Nginx handles basic traffic routing, while the API gateway focuses on the business logic of the API layer. An API gateway can also serve as part of a business gateway, focusing on API management between microservices.
Relationship summary:
- Nginx is a traffic gateway that focuses on processing network traffic, such as load balancing and reverse proxying. It is responsible for receiving client requests and routing traffic to backend services.
- An API gateway is a business gateway that handles API-specific functions such as routing, load balancing, and authentication. It typically sits between clients and microservices, providing unified management of API requests.
- Business gateways more broadly refer to the intermediary layer used in microservice architectures. In addition to API gateway functions, they also include communication and management functions between other microservices, such as traffic control and service governance.
There are also mutual calls between the business gateway, API gateway, and microservices. Each such network connection is identified by the five-tuple: source IP, source port, destination IP, destination port, and protocol.
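To make the division of labor concrete, here is a minimal sketch in Python of the per-request decisions an API gateway makes on top of plain traffic routing: authentication, rate limiting, and routing by path prefix. The service names, token, and limit are all hypothetical, and a real gateway would of course do each step far more robustly.

```python
# Toy sketch of an API gateway's per-request logic: auth check,
# rate limit, then route by path prefix to a backend service.
# All names, tokens, and limits below are made up for illustration.
from dataclasses import dataclass, field

ROUTES = {                       # path prefix -> backend service address
    "/api/orders": "http://order-service:8080",
    "/api/users": "http://user-service:8080",
}
VALID_TOKENS = {"secret-token"}  # stand-in for a real auth/authz check
RATE_LIMIT = 3                   # max requests per client in this toy window

@dataclass
class Gateway:
    counts: dict = field(default_factory=dict)  # per-client request counts

    def handle(self, client: str, path: str, token: str) -> str:
        if token not in VALID_TOKENS:
            return "401 Unauthorized"
        self.counts[client] = self.counts.get(client, 0) + 1
        if self.counts[client] > RATE_LIMIT:
            return "429 Too Many Requests"
        for prefix, backend in ROUTES.items():
            if path.startswith(prefix):
                return f"forward to {backend}{path}"
        return "404 Not Found"
```

For example, `Gateway().handle("1.2.3.4", "/api/orders/42", "secret-token")` returns `forward to http://order-service:8080/api/orders/42`, while a bad token is rejected before any routing happens.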
II. The Relationship Between DNS, Domain Names, and IP Addresses
An IP address is a device's "unique ID card" on the Internet. For example, 180.101.49.11 (the IP address of a Baidu server) is a string of numbers that network devices such as computers and routers can identify directly. It is the core identifier for locating the target device during data transmission, just like your house number: the courier (the network data packet) must rely on it to deliver accurately.
However, IP addresses are difficult to remember and not intuitive; ordinary people find it hard to recall a website's address from a string of numbers. This is where domain names come in: a domain name is an "easy-to-remember alias" for an IP address, such as www.baidu.com (Baidu's domain name) or www.jd.com (JD.com's domain name). It replaces a complex numerical IP address with character combinations that humans can easily understand and remember, such as English words or Chinese characters.
DNS (Domain Name System) acts as the “translator” between the two: when you enter a domain name into your browser to access a website, the computer does not directly recognize the domain name, but first sends a request to the DNS server. After the DNS server finds the IP address corresponding to the domain name, it returns the IP address to your device. The device then establishes a connection with the target server through this IP address, and finally loads the website content.
In summary, the core relationships are: domain name = “easy-to-remember alias” of IP address , DNS = “mapping translation system” between domain name and IP address. The purpose of these three working together is to satisfy the network devices’ need to identify IP addresses and solve the problem of humans remembering IP addresses, making internet access more convenient.
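The lookup step described above can be reproduced from code. A minimal sketch using Python's standard socket module, which asks the system resolver (and, for real domains, DNS) to translate a name into an IP address; localhost is resolved here so the example works without network access, but any domain such as www.baidu.com goes through the same call:

```python
import socket

# Translate a hostname into an IPv4 address via the system resolver.
# For a real domain this triggers a DNS lookup; "localhost" resolves
# locally, so the example works offline.
ip = socket.gethostbyname("localhost")
print(ip)  # typically 127.0.0.1
```

Once the IP address comes back, the browser opens a connection to it, exactly as the paragraph above describes.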
III. Nginx Reverse Proxy
1. Write the docker-compose.yml file and start it. (Because the network is declared external: true, the test network must already exist; create it beforehand with docker network create test.)
version: '3.1'
services:
  nginx:
    image: nginx:latest
    container_name: nginx
    restart: always
    ports:
      - "80:80"
    volumes:
      - /home/hadoop046/software/nginx/nginx.conf:/etc/nginx/nginx.conf:ro
    networks:
      - test
networks:
  test:
    external: true
2. Write the nginx.conf configuration file
# nginx.conf - Simple configuration example
worker_processes 1;    # Number of worker processes, usually 1 for small applications
pid /var/run/nginx.pid;

events {
    worker_connections 1024;    # Maximum number of connections per worker process
}

http {
    include mime.types;                        # Include MIME type definitions
    default_type application/octet-stream;     # Default MIME type

    access_log /var/log/nginx/access.log;      # Access log location
    error_log /var/log/nginx/error.log;        # Error log location

    server {
        listen 80;                # Listen on port 80
        server_name localhost;    # Server name, can be a domain name or IP address

        # Reverse proxy configuration
        location /learn/ {
            proxy_pass http://xxx:8080/test;    # Target public IP and port
        }

        # Error page configuration
        error_page 404 /404.html;
        location = /404.html {
            root /usr/share/nginx/html;
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root /usr/share/nginx/html;
        }
    }
}
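One subtlety in the proxy_pass line above is worth knowing: when the proxy_pass target carries a URI (here /test), Nginx replaces the part of the request path matched by the location prefix with that URI rather than appending the whole path. A small Python sketch of the rewrite rule (the function itself is illustrative, not Nginx code):

```python
# Sketch of Nginx's path rewriting for the config above: with
# "location /learn/" and "proxy_pass http://xxx:8080/test", the
# matched prefix "/learn/" is replaced by the proxy_pass URI "/test".
def proxied_path(request_path: str, location: str = "/learn/", uri: str = "/test") -> str:
    if request_path.startswith(location):
        return uri + request_path[len(location):]
    return request_path

print(proxied_path("/learn/a.html"))                      # -> /testa.html (no slash in between!)
print(proxied_path("/learn/a.html", "/learn/", "/test/")) # -> /test/a.html
```

As the first example shows, /learn/a.html is forwarded as /testa.html; if the backend expects /test/a.html, the proxy_pass URI needs a trailing slash.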
3. Nginx load balancing
An upstream block distributes requests across multiple backend servers; with no extra parameters, Nginx uses round-robin by default.

upstream order {
    server ip1:8080;
    server ip2:8080;
}

server {
    listen 80;

    location /learn/ {
        proxy_pass http://order/test1;
    }
}
Weighted load settings
The weight parameter skews the distribution: here ip1 receives roughly 90% of requests and ip2 roughly 10%.

upstream order {
    server ip1:8080 weight=90;
    server ip2:8080 weight=10;
}

server {
    listen 80;

    location /learn/ {
        proxy_pass http://order/test1;
    }
}
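Under the hood, Nginx's default weighted scheduling is a "smooth" weighted round-robin, which interleaves picks instead of sending long bursts to the heavy server. A sketch of that algorithm in Python (server addresses are placeholders matching the config above); with weights 90 and 10, every 10 consecutive picks give ip1 nine requests and ip2 one:

```python
# Smooth weighted round-robin, the scheme Nginx uses for its default
# upstream balancing. Server names are placeholders.
class Peer:
    def __init__(self, name: str, weight: int):
        self.name = name
        self.weight = weight
        self.current = 0  # running score, adjusted on every pick

def pick(peers: list) -> str:
    total = sum(p.weight for p in peers)
    for p in peers:
        p.current += p.weight          # every peer gains its weight
    best = max(peers, key=lambda p: p.current)
    best.current -= total              # the winner "pays back" the total
    return best.name

peers = [Peer("ip1:8080", 90), Peer("ip2:8080", 10)]
picks = [pick(peers) for _ in range(10)]
print(picks.count("ip1:8080"), picks.count("ip2:8080"))  # -> 9 1
```

Note that ip2's single pick lands mid-sequence rather than at the end, which is what makes the distribution "smooth".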