
Comparing changes

base repository: lucab/docs
base: ups/torcx-using-custom-remotes
head repository: coreos/docs
compare: master

Commits on Nov 9, 2018

  1. Update Digitalocean URLs

    bengadbois committed Nov 9, 2018
    b522185
  2. os: drop fleet references

    Ignore community-supported platforms, since they're all pretty stale.
    bgilbert committed Nov 9, 2018
    ef8ba4c
  3. Merge pull request coreos#1263 from bengadbois/digitalocean-urls

    os: update DigitalOcean URLs
    bgilbert authored Nov 9, 2018
    2f1730d
  4. Merge pull request coreos#1264 from bgilbert/fleet

    os: drop fleet references
    bgilbert authored Nov 9, 2018
    1332536

Commits on Nov 13, 2018

  1. os/sdk-modifying-coreos: drop architecture setup

    We no longer support multiple arches and setup_board no longer requires
    arguments.
    bgilbert committed Nov 13, 2018
    b095d35

Commits on Nov 14, 2018

  1. Merge pull request coreos#1265 from bgilbert/arch

    os/sdk-modifying-coreos: drop architecture setup
    bgilbert authored Nov 14, 2018
    07e0a51

Commits on Dec 5, 2018

  1. start var-vm-swapfile1.swap service

    verify with `free -htwl`
    smartbit authored Dec 5, 2018
    20c5a7b
  2. Update adding-swap.md

    `enabled`
    smartbit authored Dec 5, 2018
    46ac305
  3. Merge pull request coreos#1267 from smartbit/patch-1

    doc: start var-vm-swapfile1.swap service
    bgilbert authored Dec 5, 2018
    eda010e

Commits on Dec 13, 2018

  1. 59e8d80
  2. Merge pull request coreos#1268 from h8liu/patch-1

    To be consistent with the raw git link on line 14
    bgilbert authored Dec 13, 2018
    8881918

Commits on Mar 7, 2019

  1. fa42ae5

Commits on Mar 11, 2019

  1. Merge pull request coreos#1271 from bgilbert/mantle

    os/sdk-modifying-coreos: bump mantle version to 0.12.0
    bgilbert authored Mar 11, 2019
    a44b8ad

Commits on Mar 21, 2019

  1. 69d65e9
  2. Merge pull request coreos#1272 from bgilbert/metadata

    ignition: link to coreos-metadata legacy-mode docs
    bgilbert authored Mar 21, 2019
    7ec4baa

Commits on May 15, 2019

  1. os: document disabling SMT

    bgilbert committed May 15, 2019
    6cea585
  2. Merge pull request coreos#1273 from bgilbert/smt

    os: document disabling SMT
    bgilbert authored May 15, 2019
    ac83efa

Commits on Jul 9, 2019

  1. e441391
  2. Merge pull request coreos#1276 from bgilbert/aws

    os/ec2: try to fix HTML in "test cluster" section
    bgilbert authored Jul 9, 2019
    d403f23

Commits on Nov 20, 2019

  1. 1856a8d
  2. Merge pull request coreos#1283 from bgilbert/taa

    os/disabling-smt: document TAA mitigation
    bgilbert authored Nov 20, 2019
    29d8bab
4 changes: 2 additions & 2 deletions ignition/metadata.md
Original file line number Diff line number Diff line change
@@ -6,7 +6,7 @@ Each of these examples is written in version 2.0.0 of the config. Ensure that an

## etcd2 with coreos-metadata

This config will write a systemd drop-in (shown below) for the etcd2.service. The drop-in modifies the ExecStart option, adding a few flags to etcd2's invocation. These flags use variables defined by coreos-metadata.service to change the interfaces on which etcd2 listens. coreos-metadata is provided by Container Linux and will read the appropriate metadata for the cloud environment (AWS in this example) and write the results to `/run/metadata/coreos`. For more information on the supported platforms and environment variables, refer to the [coreos-metadata README][metadata-readme].
This config will write a systemd drop-in (shown below) for the etcd2.service. The drop-in modifies the ExecStart option, adding a few flags to etcd2's invocation. These flags use variables defined by coreos-metadata.service to change the interfaces on which etcd2 listens. coreos-metadata is provided by Container Linux and will read the appropriate metadata for the cloud environment (AWS in this example) and write the results to `/run/metadata/coreos`. For more information on the supported platforms and environment variables, refer to the [coreos-metadata documentation][metadata-docs].

```json ignition-config
{
@@ -78,4 +78,4 @@ ExecStart=/usr/bin/bash -c 'echo "CUSTOM_EC2_IPV4_PUBLIC=$(curl\
```


[metadata-readme]: https://github.com/coreos/coreos-metadata/blob/master/README.md
[metadata-docs]: https://github.com/coreos/coreos-metadata/blob/master/docs/container-linux-legacy.md
1 change: 1 addition & 0 deletions os/adding-swap.md
@@ -91,6 +91,7 @@ storage:
systemd:
  units:
    - name: var-vm-swapfile1.swap
      enabled: true
      contents: |
        [Unit]
        Description=Turn on swap
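The hunk above shows only the head of the unit. For context, a complete swap unit of this shape in a Container Linux Config might look like the following sketch; the `What=` path is an assumption derived from the unit name via systemd's path escaping:

```yaml container-linux-config
systemd:
  units:
    - name: var-vm-swapfile1.swap
      enabled: true
      contents: |
        [Unit]
        Description=Turn on swap

        [Swap]
        What=/var/vm/swapfile1

        [Install]
        WantedBy=multi-user.target
```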
10 changes: 4 additions & 6 deletions os/booting-on-digitalocean.md
@@ -111,8 +111,6 @@ To connect to a droplet after it's created, run:
ssh core@<ip address>
```

Optionally, you may want to [configure your ssh-agent](https://github.com/coreos/fleet/blob/master/Documentation/using-the-client.md#remote-fleet-access) to more easily run [fleet commands](../fleet/launching-containers-fleet.md).

## Launching droplets

### Via the API
@@ -158,10 +156,10 @@ curl --request POST "https://api.digitalocean.com/v2/droplets" \

For more details, check out [DigitalOcean's API documentation][do-api-docs].

[do-api-docs]: https://developers.digitalocean.com/#droplets
[do-keys-docs]: https://developers.digitalocean.com/#keys
[do-list-keys-docs]: https://developers.digitalocean.com/#list-all-keys
[do-token-settings]: https://cloud.digitalocean.com/settings/applications
[do-api-docs]: https://developers.digitalocean.com/documentation/v2/
[do-keys-docs]: https://developers.digitalocean.com/documentation/v2/#ssh-keys
[do-list-keys-docs]: https://developers.digitalocean.com/documentation/v2/#list-all-keys
[do-token-settings]: https://cloud.digitalocean.com/account/api/tokens

### Via the web console

98 changes: 48 additions & 50 deletions os/booting-on-ec2.md
@@ -194,8 +194,6 @@ To connect to an instance after it's created, run:
ssh core@<ip address>
```

Optionally, you may want to [configure your ssh-agent](https://github.com/coreos/fleet/blob/master/Documentation/using-the-client.md#remote-fleet-access) to more easily run [fleet commands](../fleet/launching-containers-fleet.md).

## Multiple clusters
If you would like to create multiple clusters you will need to change the "Stack Name". You can find the direct [template file on S3](https://s3.amazonaws.com/coreos.com/dist/aws/coreos-stable-hvm.template).

@@ -267,22 +265,22 @@ First we need to create a security group to allow Container Linux instances to c
</li>
<li>
Use <a href="provisioning.md">ct</a> to convert the following configuration into an Ignition config, and back in the EC2 dashboard, paste it into the "User Data" field.
```yaml container-linux-config:ec2
etcd:
  # All options get passed as command line flags to etcd.
  # Any information inside curly braces comes from the machine at boot time.
  # multi_region and multi_cloud deployments need to use {PUBLIC_IPV4}
  advertise_client_urls: "http://{PRIVATE_IPV4}:2379"
  initial_advertise_peer_urls: "http://{PRIVATE_IPV4}:2380"
  # listen on both the official ports and the legacy ports
  # legacy ports can be omitted if your application doesn't depend on them
  listen_client_urls: "http://0.0.0.0:2379"
  listen_peer_urls: "http://{PRIVATE_IPV4}:2380"
  # generate a new token for each unique cluster from https://discovery.etcd.io/new?size=3
  # specify the initial size of your cluster with ?size=X
  discovery: "https://discovery.etcd.io/<token>"
```
<ul>
<li>Paste configuration into "User Data"</li>
<li>"Continue"</li>
@@ -341,22 +339,22 @@ First we need to create a security group to allow Container Linux instances to c
</li>
<li>
Use <a href="provisioning.md">ct</a> to convert the following configuration into an Ignition config, and back in the EC2 dashboard, paste it into the "User Data" field.
```yaml container-linux-config:ec2
etcd:
  # All options get passed as command line flags to etcd.
  # Any information inside curly braces comes from the machine at boot time.
  # multi_region and multi_cloud deployments need to use {PUBLIC_IPV4}
  advertise_client_urls: "http://{PRIVATE_IPV4}:2379"
  initial_advertise_peer_urls: "http://{PRIVATE_IPV4}:2380"
  # listen on both the official ports and the legacy ports
  # legacy ports can be omitted if your application doesn't depend on them
  listen_client_urls: "http://0.0.0.0:2379"
  listen_peer_urls: "http://{PRIVATE_IPV4}:2380"
  # generate a new token for each unique cluster from https://discovery.etcd.io/new?size=3
  # specify the initial size of your cluster with ?size=X
  discovery: "https://discovery.etcd.io/<token>"
```
<ul>
<li>Paste configuration into "User Data"</li>
<li>"Continue"</li>
@@ -415,22 +413,22 @@ First we need to create a security group to allow Container Linux instances to c
</li>
<li>
Use <a href="provisioning.md">ct</a> to convert the following configuration into an Ignition config, and back in the EC2 dashboard, paste it into the "User Data" field.
```yaml container-linux-config:ec2
etcd:
  # All options get passed as command line flags to etcd.
  # Any information inside curly braces comes from the machine at boot time.
  # multi_region and multi_cloud deployments need to use {PUBLIC_IPV4}
  advertise_client_urls: "http://{PRIVATE_IPV4}:2379"
  initial_advertise_peer_urls: "http://{PRIVATE_IPV4}:2380"
  # listen on both the official ports and the legacy ports
  # legacy ports can be omitted if your application doesn't depend on them
  listen_client_urls: "http://0.0.0.0:2379"
  listen_peer_urls: "http://{PRIVATE_IPV4}:2380"
  # generate a new token for each unique cluster from https://discovery.etcd.io/new?size=3
  # specify the initial size of your cluster with ?size=X
  discovery: "https://discovery.etcd.io/<token>"
```
<ul>
<li>Paste configuration into "User Data"</li>
<li>"Continue"</li>
42 changes: 2 additions & 40 deletions os/booting-on-ecs.md
@@ -84,45 +84,7 @@ The example above pulls the latest official Amazon ECS agent container from the
If you want to configure SSH keys in order to log in, mount disks or configure other options, see the [Container Linux Configs documentation][cl-configs].

For more information on using ECS, check out the [official Amazon documentation](http://aws.amazon.com/documentation/ecs/).

[cl-configs]: provisioning.md
[ignition-docs]: https://coreos.com/ignition/docs/latest

## Connect ECS to an existing cluster

Connecting an existing cluster to ECS is simple with [fleet](../fleet/launching-containers-fleet.md) &mdash; the agent can be run as a global unit. The unit looks similar to the example above:

#### amazon-ecs-agent.service
```ini
[Unit]
Description=Amazon ECS Agent
After=docker.service
Requires=docker.service
Requires=network-online.target
After=network-online.target

[Service]
Environment=ECS_CLUSTER=your_cluster_name
Environment=ECS_LOGLEVEL=warn
ExecStartPre=-/usr/bin/docker kill ecs-agent
ExecStartPre=-/usr/bin/docker rm ecs-agent
ExecStartPre=/usr/bin/docker pull amazon/amazon-ecs-agent
ExecStart=/usr/bin/docker run --name ecs-agent --env=ECS_CLUSTER=${ECS_CLUSTER} --env=ECS_LOGLEVEL=${ECS_LOGLEVEL} --publish=127.0.0.1:51678:51678 --volume=/var/run/docker.sock:/var/run/docker.sock amazon/amazon-ecs-agent
ExecStop=/usr/bin/docker stop ecs-agent

[X-Fleet]
Global=true
```

Be sure to change `ECS_CLUSTER` to the cluster name you've configured in the AWS console or leave it empty for the default.

To run this unit on each machine, all you have to do is submit it to the cluster:

```sh
$ fleetctl start amazon-ecs-agent.service
Triggered global unit amazon-ecs-agent.service start
```

You should see all of your machines show up in the ECS CLI output.

For more information on using ECS, check out the [official Amazon documentation](http://aws.amazon.com/documentation/ecs/).
2 changes: 1 addition & 1 deletion os/booting-on-virtualbox.md
@@ -73,7 +73,7 @@ For more information on customization that can be done with cloud-config, head o
You need a config-drive to configure at least one SSH key to access the virtual machine. If you are in a hurry, you can create a basic config-drive with the following steps:

```sh
wget https://raw.github.com/coreos/scripts/master/contrib/create-basic-configdrive
wget https://raw.githubusercontent.com/coreos/scripts/master/contrib/create-basic-configdrive
chmod +x create-basic-configdrive
./create-basic-configdrive -H my_vm01 -S ~/.ssh/id_rsa.pub
```
91 changes: 91 additions & 0 deletions os/disabling-smt.md
@@ -0,0 +1,91 @@
# Disabling SMT on CoreOS Container Linux

Recent Intel CPU vulnerabilities ([L1TF] and [MDS]) cannot be fully mitigated in software without disabling Simultaneous Multi-Threading. This can have a substantial performance impact and is only necessary for certain workloads, so for compatibility reasons, SMT is enabled by default.

In addition, the Intel [TAA] vulnerability cannot be fully mitigated without disabling either SMT or Transactional Synchronization Extensions (TSX). Disabling TSX generally has less performance impact, so it is the preferred approach on systems that don't otherwise need to disable SMT. For compatibility reasons, TSX is enabled by default.

SMT and TSX should be disabled on affected Intel processors under the following circumstances:
1. A bare-metal host runs untrusted virtual machines, and [other arrangements][l1tf-mitigation] have not been made for mitigation.
2. A bare-metal host runs untrusted code outside a virtual machine.

SMT can be conditionally disabled by passing `mitigations=auto,nosmt` on the kernel command line. This will disable SMT only if required for mitigating a vulnerability. This approach has two caveats:
1. It does not protect against unknown vulnerabilities in SMT.
2. It allows future Container Linux updates to disable SMT if needed to mitigate new vulnerabilities.

Alternatively, SMT can be unconditionally disabled by passing `nosmt` on the kernel command line. This provides the most protection and avoids possible behavior changes on upgrades, at the cost of a potentially unnecessary reduction in performance.

TSX can be conditionally disabled on vulnerable CPUs by passing `tsx=auto` on the kernel command line, or unconditionally disabled by passing `tsx=off`. However, neither setting takes effect on systems affected by MDS, since MDS mitigation automatically protects against TAA as well.

For typical use cases, we recommend enabling the `mitigations=auto,nosmt` and `tsx=auto` command-line options.

[L1TF]: https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/l1tf.html
[l1tf-mitigation]: https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/l1tf.html#mitigation-selection-guide
[MDS]: https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html
[TAA]: https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html

## Configuring new machines

The following Container Linux Config performs two tasks:

1. Adds `mitigations=auto,nosmt tsx=auto` to the kernel command line. This affects the second and subsequent boots of the machine, but not the first boot.
2. On the first boot, disables SMT at runtime if the system has an Intel processor. This is sufficient to protect against currently-known SMT vulnerabilities until the system is rebooted. After reboot, SMT will be re-enabled if the processor is not actually vulnerable.

```yaml container-linux-config
# Add kernel command-line arguments to automatically disable SMT or TSX
# on CPUs where they are vulnerable. This will affect the second and
# subsequent boots of the machine, but not the first boot.
storage:
  filesystems:
    - name: OEM
      mount:
        device: /dev/disk/by-label/OEM
        format: ext4
  files:
    - filesystem: OEM
      path: /grub.cfg
      append: true
      mode: 0644
      contents:
        inline: |
          # Disable SMT on CPUs affected by MDS or similar vulnerabilities.
          # Disable TSX on CPUs affected by TAA but not by MDS.
          set linux_append="$linux_append mitigations=auto,nosmt tsx=auto"
# On the first boot only, disable SMT at runtime if it is enabled and
# the system has an Intel CPU. L1TF, MDS, and TAA vulnerabilities are
# limited to Intel CPUs.
systemd:
  units:
    - name: disable-smt-firstboot.service
      enabled: true
      contents: |
        [Unit]
        Description=Disable SMT on first boot on Intel CPUs to mitigate MDS
        DefaultDependencies=no
        Before=sysinit.target shutdown.target
        Conflicts=shutdown.target
        ConditionFirstBoot=true

        [Service]
        Type=oneshot
        ExecStart=/bin/bash -c 'active="$(cat /sys/devices/system/cpu/smt/active)" && if [[ "$active" != 0 ]] && grep -q "vendor_id.*GenuineIntel" /proc/cpuinfo; then echo "Disabling SMT." && echo off > /sys/devices/system/cpu/smt/control; fi'

        [Install]
        WantedBy=sysinit.target
```
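The `ExecStart` one-liner in the unit above packs its check into a single bash command. As a sketch, the same decision can be factored into a function that takes the sysfs and cpuinfo paths as parameters, so the logic can be exercised against sample files; on a live machine the real paths are `/sys/devices/system/cpu/smt/active` and `/proc/cpuinfo`:

```shell
#!/bin/bash
# Sketch of the first-boot decision logic from the unit above.
# Returns success (0) only if SMT is currently active AND the CPU is Intel,
# i.e. exactly the case where the unit would write "off" to
# /sys/devices/system/cpu/smt/control.
should_disable_smt() {
  local smt_active_file="$1" cpuinfo_file="$2"
  local active
  active="$(cat "$smt_active_file")"
  [[ "$active" != 0 ]] && grep -q "vendor_id.*GenuineIntel" "$cpuinfo_file"
}
```

On an AMD machine, or one where SMT is already off, the function fails and the unit leaves the control file untouched.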
## Configuring existing machines

To add `mitigations=auto,nosmt tsx=auto` to the kernel command line on an existing system, add the following line to `/usr/share/oem/grub.cfg`:

```
set linux_append="$linux_append mitigations=auto,nosmt tsx=auto"
```

For example, using SSH:

```sh
ssh core@node01 'sudo sh -c "echo \"set linux_append=\\\"\\\$linux_append mitigations=auto,nosmt tsx=auto\\\"\" >> /usr/share/oem/grub.cfg && systemctl reboot"'
```

If you use locksmith for reboot coordination, replace `systemctl reboot` with `locksmithctl send-need-reboot`.
2 changes: 1 addition & 1 deletion os/generate-self-signed-certificates.md
@@ -31,7 +31,7 @@ cfssl print-defaults csr > ca-csr.json

### Certificate types which are used inside Container Linux

* **client certificate** is used to authenticate client by server. For example `etcdctl`, `etcd proxy`, `fleetctl` or `docker` clients.
* **client certificate** is used to authenticate client by server. For example `etcdctl`, `etcd proxy`, or `docker` clients.
* **server certificate** is used by server and verified by client for server identity. For example `docker` server or `kube-apiserver`.
* **peer certificate** is used by etcd cluster members as they communicate with each other in both ways.

2 changes: 1 addition & 1 deletion os/getting-started-with-docker.md
@@ -64,7 +64,7 @@ When running Docker containers manually, the most important option is to run the
docker run -d coreos/apache [process]
```

After you are comfortable with the mechanics of running containers by hand, it's recommended to use [systemd units](getting-started-with-systemd.md) and/or [fleet](../fleet/launching-containers-fleet.md) to run your containers on a cluster of Container Linux machines.
After you are comfortable with the mechanics of running containers by hand, it's recommended to use [systemd units](getting-started-with-systemd.md) to run a container on a Container Linux machine.

Do not run containers with detached mode inside of systemd unit files. Detached mode prevents your init system, in our case systemd, from monitoring the process that owns the container because detached mode forks it into the background. To prevent this issue, just omit the `-d` flag if you aren't running something manually.
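As a sketch of the two paragraphs above, a minimal systemd unit for the `coreos/apache` image used earlier might look like this; the unit name, container name, and foreground command are illustrative assumptions, not from the original doc:

```ini
# apache.service — illustrative example
[Unit]
Description=Apache container
After=docker.service
Requires=docker.service

[Service]
# Remove any stale container from a previous run; the "-" prefix
# tells systemd to ignore a failure here (e.g. no such container).
ExecStartPre=-/usr/bin/docker rm -f apache1
# No -d flag: systemd must stay the parent of the container process.
ExecStart=/usr/bin/docker run --name apache1 coreos/apache /usr/sbin/apache2ctl -D FOREGROUND
ExecStop=/usr/bin/docker stop apache1

[Install]
WantedBy=multi-user.target
```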
