{{header|infra}}


= Global Presence =


The Fedora Infrastructure network spans multiple continents, with datacenters in North America, the UK, and Germany:


# iad2 - main datacenter in Ashburn, VA, USA
# phx2 - previous main datacenter in Phoenix, AZ, USA
# rdu2 - Raleigh, NC, USA
# tummy - Colorado, USA
# osuosl - Oregon, USA
# bodhost - UK
# ibiblio - North Carolina, USA
# internetx - Germany
# colocation america - Los Angeles, CA, USA
# dedicated solutions - USA
# host1plus - Germany


= Network Topology =


This section shows how our servers are interconnected and how they connect to the outside world.


[[Image:FINTopology.png|border|thumb|center|650px|alt=Infrastructure Network Topology|Infrastructure Network Topology]]


= Network Architecture =


The following diagram shows the overall network architecture. [https://fedoraproject.org/ fedoraproject.org] and [http://fedoraproject.org/wiki/Infrastructure/Services admin.fedoraproject.org] are round-robin <code>DNS</code> entries, populated based on GeoIP information: clients in North America, for example, get a pool of servers in North America. Each of those servers in <code>DNS</code> is a proxy server. It accepts connections using <code>Apache</code>. <code>Apache</code> uses <code>HAProxy</code> as a backend, and in turn some (but not all) services use <code>varnish</code> for caching. If <code>varnish</code> has a response cached, the request is answered from cache; otherwise it is forwarded to a backend application server. Many of these application servers are in the main datacenter, iad2, and some are at other sites. The application server processes the request and sends the response back.
[[Image:FINArchitecture.png|border|thumb|center|650px|alt=Infrastructure Network Architecture|Infrastructure Network Architecture]]
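The Apache-to-HAProxy step described above can be sketched as a minimal <code>HAProxy</code> configuration fragment. The host names, ports, and backend names here are hypothetical illustrations; the real Fedora configs differ:

```haproxy
# Hypothetical fragment -- real host names and ports differ.
frontend wiki-in
    bind 127.0.0.1:10001          # Apache forwards requests to this local port
    default_backend wiki

backend wiki
    balance roundrobin            # spread requests across identical app servers
    server app01 app01.iad2.example:80 check
    server app02 app02.iad2.example:80 check
```

For services that use <code>varnish</code>, the backend servers listed here would be cache instances rather than the application servers themselves.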




== Proxy View ==


This shows what happens in the proxies. Incoming <code>DNS</code>-balanced user requests hit <code>Apache httpd</code> on a proxy server. Apache forwards each request to <code>HAProxy</code>, which load-balances requests across the app servers; some of them are reached over <code>VPN</code>. An example of an external source is [http://www.fedoraproject.org/people/ fedoraproject.org/people/], which is a proxy pass to [http://people.fedoraproject.org/ people.fedoraproject.org] hosted at Duke. In some cases <code>varnish</code> also sits between <code>HAProxy</code> and the app servers to help cache information. Local requests use a standard alias in the <code>Apache</code> configs.
[[Image:FINProxyLayer.png|border|thumb|center|650px|alt=Infrastructure Proxy Server Flow Chart|Infrastructure Proxy Server Flow Chart]]
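The two request paths described above (a proxy pass to an external host versus a local alias) would look roughly like this in an Apache config. The directives are real Apache directives, but the paths below are an illustrative sketch, not the actual Fedora proxy configuration:

```apache
# Hypothetical fragment -- the real proxy configs differ.
# External source: pass /people/ through to the remote host.
ProxyPass        /people/ http://people.fedoraproject.org/
ProxyPassReverse /people/ http://people.fedoraproject.org/

# Local request: served directly from disk via a standard alias.
Alias /static/ /srv/web/static/
```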


== Application Layer ==


This is a generic view of how our applications work. Each application may have its own design, but the premise is the same. Incoming requests are load-balanced by the proxy servers and reach the appropriate service box. All application servers in the clustered-services area must be identical; if an exception is made, the service must be moved to a solo-services box. Most solo services are one-offs or proof-of-concept (test) services. Most commonly, our single points of failure lie in the data layer.
[[Image:FINApplicationLayer.png|border|thumb|center|650px|alt=Infrastructure Application Layer Diagram|Infrastructure Application Layer Diagram]]
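The routing rule above — any member of an identical cluster can take a request, while a solo service is pinned to a single box — can be sketched in Python. All host and service names here are made up for illustration:

```python
from itertools import cycle

# Hypothetical pools; real Fedora host names differ.
CLUSTERED = ["app01.iad2", "app02.iad2", "app03.iad2"]  # identical app servers
SOLO = {"proof-of-concept": "solo01.iad2"}               # one-off/test services

_round_robin = cycle(CLUSTERED)

def route(service):
    """Return the backend host that should handle a request to `service`."""
    # Solo services are pinned to their single box; everything else can go
    # to any member of the cluster, since all clustered servers are identical.
    if service in SOLO:
        return SOLO[service]
    return next(_round_robin)
```

Because the clustered servers must be identical, the balancer is free to pick any of them per request; a service that needed special-cased state on one host would have to move to the solo pool.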


= Contributing =


You can contribute to Fedora Infrastructure in several ways. If you would like to improve the content of this page, have a look at [[Infrastructure/GettingStarted|GettingStarted]]. If you are wondering why there is no server in your country and would like to donate hardware, please visit our [[Donations|donations]] and [http://fedoraproject.org/sponsors sponsors] pages.


[[Category:Infrastructure]]

Latest revision as of 08:57, 11 August 2020
