
CentOS.org

Planet CentOS - http://planet.centos.org/

CentOS Seven blog: New CentOS Atomic Release and Kubernetes System Containers Now Available

August 11, 2017 - 20:53

Last week, the CentOS Atomic SIG released an updated version of CentOS Atomic Host (7.1707), a lean operating system designed to run Docker containers, built from standard CentOS 7 RPMs, and tracking the component versions included in Red Hat Enterprise Linux Atomic Host.

The release, which came as part of the monthly CentOS release stream, was a modest one, including only a single glibc bugfix update. The next Atomic Host release will be based on the RHEL 7.4 source code and will include support for overlayfs container storage, among other enhancements.

Outside of the Atomic Host itself, the SIG has updated its Kubernetes container images to be usable as system containers. What's more, in addition to the Kubernetes 1.5.x-based containers that derive from RHEL, the Atomic SIG is now producing packages and containers that provide the current 1.7.x version of Kubernetes.

Containerized Master

The downstream release of CentOS Atomic Host ships without the kubernetes-master package built into the image. You can install the master kubernetes components (apiserver, scheduler, and controller-manager) as system containers, using the following commands:

# atomic install --system --system-package=no --name kube-apiserver registry.centos.org/centos/kubernetes-apiserver:latest
# atomic install --system --system-package=no --name kube-scheduler registry.centos.org/centos/kubernetes-scheduler:latest
# atomic install --system --system-package=no --name kube-controller-manager registry.centos.org/centos/kubernetes-controller-manager:latest

Kubernetes 1.7.x

The CentOS Virt SIG is now producing Kubernetes 1.7.x rpms, available through this yum repo. The Atomic SIG is maintaining system containers based on these rpms that can be installed as follows:

On your master:

# atomic install --system --system-package=no --name kube-apiserver registry.centos.org/centos/kubernetes-sig-apiserver:latest
# atomic install --system --system-package=no --name kube-scheduler registry.centos.org/centos/kubernetes-sig-scheduler:latest
# atomic install --system --system-package=no --name kube-controller-manager registry.centos.org/centos/kubernetes-sig-controller-manager:latest

On your node(s):

# atomic install --system --system-package=no --name kubelet registry.centos.org/centos/kubernetes-sig-kubelet:latest
# atomic install --system --system-package=no --name kube-proxy registry.centos.org/centos/kubernetes-sig-proxy:latest
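Once installed, each of these system containers is managed as a regular systemd service named after the --name argument passed to atomic install. A minimal sketch of bringing up the master components (the unit names are assumed to match the names used above):

```shell
# Unit names below assume the --name values used at install time.
systemctl enable kube-apiserver kube-scheduler kube-controller-manager
systemctl start kube-apiserver kube-scheduler kube-controller-manager
systemctl status kube-apiserver
```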

Both the 1.5.x and 1.7.x sets of containers have been tested with the kubernetes ansible scripts provided in the upstream contrib repository, and function as drop-in replacements for the installed rpms. If you prefer to run Kubernetes from installed rpms, you can layer the master components onto your Atomic Host image using rpm-ostree package layering with the command: atomic host install kubernetes-master.

The containers referenced in these systemd service files are built in and hosted from the CentOS Community Container Pipeline, based on Dockerfiles from the CentOS-Dockerfiles repository.

Download CentOS Atomic Host

CentOS Atomic Host is available as a VirtualBox or libvirt-formatted Vagrant box, or as an installable ISO, qcow2 or Amazon Machine image. For links to media, see the CentOS wiki.

Upgrading

If you're running a previous version of CentOS Atomic Host, you can upgrade to the current image by running the following command:

$ sudo atomic host upgrade

Release Cycle

The CentOS Atomic Host image follows the upstream Red Hat Enterprise Linux Atomic Host cadence. After sources are released, they're rebuilt and included in new images. After the images are tested by the SIG and deemed ready, we announce them.

Getting Involved

CentOS Atomic Host is produced by the CentOS Atomic SIG, based on upstream work from Project Atomic. If you'd like to work on testing images, help with packaging, documentation -- join us!

The SIG meets every two weeks on Tuesday at 04:00 UTC in #centos-devel, and on the alternating weeks, meets as part of the Project Atomic community meeting at 16:00 UTC on Monday in the #atomic channel. You'll often find us in #atomic and/or #centos-devel if you have questions. You can also join the atomic-devel mailing list if you'd like to discuss the direction of Project Atomic, its components, or have other questions.

Categories: IT

CentOS Seven blog: CentOS Linux 7 (1708); based on RHEL 7.4 Source Code

August 4, 2017 - 13:04

Red Hat released Red Hat Enterprise Linux 7.4 on August 1st, 2017 (Info).  In the CentOS world, we call this type of release a 'Point Release', meaning that the major version of a distribution (in this case Red Hat Enterprise Linux 7) is getting a new point in time update set (in this case '.4').  In this specific case, there were about 700 Source packages that were updated.  On this release date, the CentOS Project team began building a point release of CentOS Linux 7, CentOS Linux 7 (1708), with this new source code from Red Hat.  Here is how we do it.

When there is a new release of RHEL 7 source code, the public release of this source code happens on the CentOS git server (git.centos.org).  We then use a published set of tools (tools) to build Source RPMs (info) from the released git source code and immediately start building the updated version of CentOS Linux.  We use a program called mock to build Binary RPM packages from the SRPMs.
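As a rough sketch of that last step, once an SRPM exists, mock rebuilds it as binary RPMs inside a clean chroot (the mock configuration name and package below are placeholders for illustration, not a specific CentOS build):

```shell
# Rebuild one SRPM in a clean, reproducible buildroot.
mock -r centos-7-x86_64 --rebuild example-1.0-1.el7.src.rpm

# The resulting binary RPMs land in the mock result directory:
ls /var/lib/mock/centos-7-x86_64/result/*.rpm
```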

At the time of this article (5:00am CDT on August 4th, 2017), we have completed building 574 of the approximately 700 SRPMs needed for our point release.  Note that this is the largest number of packages in an EL7 point release so far.

What's Next

Continuous Release Repository

Once we complete the building of the 700 SRPMs in the point release, they will start our QA process.  Once the packages have gone through enough of the QA process to ensure they are built correctly, do the normal things, and link against the proper libraries (tutorial; linking), the first set of updates we release will come from our Continuous Release repository.  These binary packages will be the same packages that we will use in the next full tree, and they will be available as updates in the CR repo of the current release.  If you have opted into the CR repo (explained in the above link), then after we populate and announce the release of those packages, a normal 'yum update' will upgrade you to the new packages.
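Opting into CR is a single package install followed by a normal update. Roughly, on a CentOS Linux 7 machine as root (the centos-release-cr package name matches the opt-in procedure documented in the CentOS wiki):

```shell
# Enable the Continuous Release repository, then pull in CR updates.
yum install centos-release-cr
yum update
```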

Normally the only packages that we don't release into the CR repo are the new centos-release package and the new anaconda package, and any packages associated specifically with them.  Anaconda is the installer used to create the new ISOs for install and the centos-release package has information on the new full release.  These packages, excluded from CR, will be in the new installer tree and on the new install media of the full release.

Historically, the CR repo is released between 7 and 14 days after Red Hat source code release, so we would expect CR availability between the 8th and 15th of August, 2017 for this release.

Final Release and New Install Media, CentOS Linux 7 (1708)

After the CR Repo release, the CentOS team and the QA team will continue QA testing and we will create a compilation of the newly built and released CR packages and packages still relevant from the last release into a new repository for CentOS Linux 7 (1708). Once we have a new repo, we will create and test install media. The repository and install media will then be made available on mirror.centos.org.  It will be in a directory labeled 7.4.1708.

Historically, the final release becomes available 3 to 6 weeks after the release of the source code by Red Hat.  So, we would expect our full release to happen sometime between August 22nd and  September 12th, 2017.

As I mentioned earlier, this is the largest point release yet in terms of the number of packages released in the EL7 cycle to date, so each of the above cycles may take a few days longer.  Building this set of packages also requires something new: the Developer Toolset (version 6) compiler, for some packages.  This should not be a major issue, as the Software Collections Special Interest Group (SCL SIG) has already released a working version of that toolset.  A big thank you to that SIG, as they have saved me a huge amount of work and time for this release.

Categories: IT

CentOS Seven blog: Updated CentOS Vagrant Images Available (v1707.01)

August 3, 2017 - 00:40

We are pleased to announce new official Vagrant images of CentOS Linux 6.9 and CentOS Linux 7.3.1611 for x86_64, with packages updated to 31 July 2017 and the following changes:

  • we are again using the same kickstarts for Hyper-V and the other hypervisors
  • you can now login on the serial console (useful if networking is broken)
Known Issues
  1. The VirtualBox Guest Additions are not preinstalled; if you need them for shared folders, please install the vagrant-vbguest plugin and add the following line to your Vagrantfile: config.vm.synced_folder ".", "/vagrant", type: "virtualbox"

    We recommend using NFS instead of VirtualBox shared folders if possible; you can also use the vagrant-sshfs plugin, which, unlike NFS, works on all operating systems.

  2. Since the Guest Additions are missing, our images are preconfigured to use rsync for synced folders. Windows users can either use SMB for synced folders, or disable the sync directory by adding the line config.vm.synced_folder ".", "/vagrant", disabled: true

    to their Vagrantfile, to prevent errors on "vagrant up".

  3. Vagrant 1.8.5 is unable to create new CentOS Linux boxes due to Vagrant bug #7610
  4. Vagrant 1.8.7 is unable to download or update boxes due to Vagrant bug #7969.
  5. Vagrant 1.9.1 broke private networking, see Vagrant bug #8166
  6. Vagrant 1.9.3 doesn't work with SMB sync due to Vagrant bug #8404
  7. The vagrant-libvirt plugin is only compatible with Vagrant 1.5 to 1.8
  8. Installing open-vm-tools is not enough for enabling shared folders with Vagrant’s VMware provider. Please follow the detailed instructions in https://github.com/mvermaes/centos-vmware-tools (updated for this release).
Recommended Setup on the Host

Our automatic testing is running on a CentOS Linux 7 host, using Vagrant 1.9.4 with vagrant-libvirt and VirtualBox 5.1.20 (without the Guest Additions) as providers. We strongly recommend using the libvirt provider when stability is required.

We also performed additional manual testing with Vagrant 1.9.6 on OS X 10.11.6, with VirtualBox 5.1.22.

Downloads

The official images can be downloaded from Hashicorp’s Atlas. We provide images for libvirt-kvm, VirtualBox and VMware.

If you have never used our images before:

vagrant box add centos/6 # for CentOS Linux 6, or...
vagrant box add centos/7 # for CentOS Linux 7

Existing users can upgrade their images:

vagrant box update --box centos/6
vagrant box update --box centos/7

Verifying the integrity of the images

The SHA256 checksums of the images are signed with the CentOS 7 Official Signing Key. First, download and verify the checksum file:

$ curl http://cloud.centos.org/centos/7/vagrant/x86_64/images/sha256sum.txt.asc -o sha256sum.txt.asc
$ gpg --verify sha256sum.txt.asc
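If you download an image manually (outside Vagrant), the same signed list can drive sha256sum directly; the box filename below is an example, substitute the file you actually fetched:

```shell
# Verify a manually downloaded box against the signed checksum list.
# The filename is an example; use the one you downloaded.
grep "CentOS-7-x86_64-Vagrant-1707_01.LibVirt.box" sha256sum.txt.asc | sha256sum -c -
```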

If the check passed, you can use the corresponding checksum when downloading the image with Vagrant:

$ vagrant box add --checksum-type sha256 --checksum aabcfe77a08b72bacbd6f05e5f26b67983b29314ee0039d0db4c9b28b4909fcd --provider libvirt --box-version 1705.01 centos/7

Unfortunately, vagrant box update doesn't accept a --checksum argument. Since there's no binary diffing involved in updating (the download size is the same, whether you have a previous version of the box or not), you can first issue vagrant box remove centos/7 and then download the box as described above.

Feedback

If you encounter any unexpected issues with the Vagrant images, feel free to ask on the centos-devel mailing list, or via IRC, in #centos on Freenode.

Acknowledgements

We would like to warmly thank Fabian Arrotin and Thomas Oulevey for their work on the build infrastructure, as well as Patrick Lang from Microsoft for testing and feedback on the Hyper-V images.

We would also like to thank the following people (listed alphabetically):

  • Graham Mainwaring, for helping with tests and validations
  • Michael Vermaes, for testing our official images, as well as for writing the detailed guide to using them with VMware Fusion Pro and VMware Workstation Pro.
Categories: IT

Fabian Arrotin: Using NFS for OpenStack (glance,nova) with selinux

July 28, 2017 - 00:00

As announced already, I was (among other things) playing with OpenStack/RDO and had deployed a small OpenStack setup in the CentOS Infra. Then I had to look at our existing DevCloud setup. This setup was based on OpenNebula running on CentOS 6, also using Gluster as the backend for the VM store. That's when I found out that Gluster isn't a valid option anymore: Gluster was deprecated and has now even been removed from Cinder. Sad, as one advantage of Gluster is that you could (you had to!) use libgfapi so that the qemu-kvm process could talk directly to Gluster through libgfapi rather than accessing VM images over locally mounted Gluster volumes (please, don't even try to do that through fuse).

So what could replace Gluster on the OpenStack side? I still have some dedicated nodes for storage backend[s], but not enough to even think about Ceph. So it seems my only option was to consider NFS. (Technically speaking, the driver was removed from Cinder, but I could have tried to use it only for Glance and Nova, as I have no need for Cinder for the DevCloud project; still, it would clearly be dangerous for potential upgrades.)

It's not that I'm a fan of storing qcow2 images on top of NFS, but it seemed to be my only option, and at least the most transparent/least intrusive path, should I need to migrate to something else later. So let's test this, using NFS over InfiniBand (using IPoIB), and thus at "good speed" (I still have the InfiniBand hardware in place that was running for Gluster, which will be replaced).

It's easy to mount the NFS-exported directory under /var/lib/glance/images for Glance, and then, on every compute node, another NFS export under /var/lib/nova/instances/.
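For illustration, the mounts could look like this (the server name and export paths are hypothetical; adjust to your environment):

```shell
# On the controller, for Glance images:
mount -t nfs nfs-server:/export/glance /var/lib/glance/images

# On each compute node, for Nova instances:
mount -t nfs nfs-server:/export/nova /var/lib/nova/instances
```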

That's where you have to see what would be blocked by SELinux, as the current policy shipped with openstack-selinux-0.8.6-0 (from Ocata) doesn't seem to allow that.

I initially tested services one by one and decided to open a Pull Request for this, but in the meantime I rebuilt a custom SELinux policy that seems to do the job in my RDO playground.

Here is the .te that you can compile into a usable .pp policy file:

module os-local-nfs 0.2;

require {
    type glance_api_t;
    type virtlogd_t;
    type nfs_t;
    class file { append getattr open read write unlink create };
    class dir { search getattr write remove_name create add_name };
}

#============= glance_api_t ==============
allow glance_api_t nfs_t:dir { search getattr write remove_name create add_name };
allow glance_api_t nfs_t:file { write getattr unlink open create read };

#============= virtlogd_t ==============
allow virtlogd_t nfs_t:dir search;
allow virtlogd_t nfs_t:file { append getattr open };
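A sketch of compiling and loading the module with the standard SELinux toolchain (the file names simply follow the module name above):

```shell
# Compile the .te source, package it as a .pp, and load it.
checkmodule -M -m -o os-local-nfs.mod os-local-nfs.te
semodule_package -o os-local-nfs.pp -m os-local-nfs.mod
semodule -i os-local-nfs.pp

# The virt_use_nfs boolean is needed as well (-P makes it persistent).
setsebool -P virt_use_nfs 1
```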

Of course you also need to enable some booleans. Some are already set by openstack-selinux (you can see the enabled booleans by looking at /etc/selinux/targeted/active/booleans.local), but you now also need virt_use_nfs=1.

Now that it works, I can replay that (all of it coming from puppet) on the DevCloud nodes.

Categories: IT

CentOS Seven blog: No! Don’t turn off SELinux!

July 27, 2017 - 16:24

One of the daily activities of the CentOS Community Lead is searching the Internet looking for new and interesting content about CentOS that we can share on the @CentOSProject Twitter account, or Facebook, Google +, or Reddit. There's quite a bit of content out there, too, since CentOS is very popular.

Unfortunately, some of the content gets unshared, based on one simple text search:

"SELinux AND disable"

That phrase is indicative of one thing: the author is advocating the deactivation of SELinux, one of the most important security tools any Linux user can have. When that step is outlined, we have to pass on sharing it, and we even recommend readers ignore such advice completely.

What is SELinux?

But why do articles feel the need to outright deactivate SELinux rather than help readers work through any problems they might have? Is SELinux that hard?

Actually, it's really not.

According to Thomas Cameron, Chief Architect for Red Hat, SELinux is a form of mandatory access control. In the past, UNIX and Linux systems have used discretionary access control, where a user will own a file, the user's group will own the file, and everyone else is considered to be other. Users have the discretion to set permissions on their own files, and Linux will not stop them, even if the new permissions might be less than secure, such as running chmod 777 on your home directory.

"[Linux] will absolutely give you a gun, and you know where your foot is," Cameron said back in 2015 at Red Hat Summit. The situation gets even more dangerous when a user has root permissions, but that is the nature of discretionary access control.

With a mandatory access control system like SELinux in place, policies can be set and implemented by administrators that can typically prevent even the most reckless user from giving away the keys to the store. These policies are also fixed so that not even root access can change them. In the example above, if a user had run chmod 777 on their home directory, there should be a policy in place within SELinux to prevent other users or processes from accessing that home directory.

Policies can be super fine-grained, setting access rules for anything from users and files to memory, sockets, and ports.

In distros like CentOS, there are typically two kinds of policies.

  • Targeted. Targeted processes are protected by SELinux, and everything else is unconfined.
  • MLS. Multi-level/multi-category security policies that are complex and often overkill for most organizations.

Targeted SELinux is the operational level most SELinux users are going to work with. There are two important concepts to keep in mind when working with SELinux, Cameron emphasized.

The first is labeling, where files, processes, ports, etc. are all labeled with an SELinux context. For files and directories, these labels are handled as extended attributes within the filesystem itself. For processes, ports, and the rest, labels are managed by the Linux kernel.
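You can see these labels yourself with the -Z flag that most SELinux-aware tools accept; a quick illustration (the contexts shown in the comments are typical values, not guaranteed on every system):

```shell
# Inspect SELinux contexts: -Z adds the label column.
ls -Z /etc/passwd     # file label, stored as an extended attribute
ps -eZ | head -n 5    # process labels, managed by the kernel
```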

Within the SELinux label is a type category (along with SELinux user, role, and level categories). Those latter aspects of the label are really only useful for complex MLS policies. But for targeted policies, type enforcement is key. A process that is running in a given context -- say, httpd_t -- should be allowed to interact with a file that has an httpd_config_t label, for example.

Together, labeling and type enforcement form the core functionality of SELinux. This simplification of SELinux, and the wealth of useful tools in the SELinux ecosystem, have made SELinux much easier to manage than in the old days.

So why is it that when SELinux throws an error, so many tutorials and recommendations simply tell you to turn off SELinux enforcement? For Cameron, that's analogous to turning your car's radio up really loud when you hear it making a weird noise.

Instead of turning SELinux off and thus leaving your CentOS system vulnerable to any number of problems, try checking the common problems that come up when working with SELinux. These problems typically include:

  • Mislabeling. This is the most common type of SELinux error, where something has the wrong label and it needs to be fixed.
  • Policy Modification. If SELinux has set a certain policy by default, based on use cases over time, you may have a specific need to change that policy slightly.
  • Policy Bugs. An outright mistake in the policy.
  • An Actual Attack. If this is the case, 'setenforce 0' would seem a very bad idea.
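For the common mislabeling case, the fix is usually restorecon, optionally after recording a persistent file-context rule first (the paths and type below are examples, not a prescription):

```shell
# Reset files to the labels the loaded policy expects.
restorecon -Rv /var/www/html

# Or record a persistent context rule for a custom path, then relabel.
semanage fcontext -a -t httpd_sys_content_t "/srv/web(/.*)?"
restorecon -Rv /srv/web
```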
Don't Turn It Off!

If someone tells you not to run SELinux, this is not based on any reason other than supposed convenience or misinformation about SELinux.

The "convenience" argument would seem to be moot, given that a little investigation of SELinux errors using tools like `sealert` reveals verbose and detailed messages on what the problem is and exactly what commands are needed to solve it.

Indeed, Cameron recommends that instead of turning off SELinux altogether, you temporarily run the process with SELinux in permissive mode. When policy violations (known as AVC denials) show up in the SELinux logs, you can either fix the boolean settings within existing policies to allow the new process to run without error, or, if needed, build new policy modules on a test machine, move them to production machines, install them with `semodule -i`, and set booleans based on what was learned on the test machines.
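A sketch of that workflow for a single domain (httpd_t, the boolean, and the module name are illustrative examples):

```shell
# Make only one domain permissive instead of the whole system.
semanage permissive -a httpd_t

# ...exercise the application, then review the AVC denials:
ausearch -m avc -ts recent

# If an existing boolean covers the denial, just flip it:
setsebool -P httpd_can_network_connect on

# Otherwise, generate a local module on a test machine and install it:
ausearch -m avc -ts recent | audit2allow -M mylocalpolicy
semodule -i mylocalpolicy.pp

# Finally, put the domain back into enforcing mode.
semanage permissive -d httpd_t
```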

This is not 2010 anymore; SELinux on CentOS is not difficult to untangle and does not have to be pushed aside in favor of convenience anymore.

You can read more about SELinux in the CentOS wiki. Or see the SELinux coloring book, for a gentler introduction to what it is and how it works.

Categories: IT

CentOS Seven blog: CentOS Atomic Host 7.1706 Released

July 19, 2017 - 01:38

An updated version of CentOS Atomic Host (tree version 7.1706) is now available. CentOS Atomic Host is a lean operating system designed to run Docker containers, built from standard CentOS 7 RPMs, and tracking the component versions included in Red Hat Enterprise Linux Atomic Host.

CentOS Atomic Host is available as a VirtualBox or libvirt-formatted Vagrant box, or as an installable ISO, qcow2 or Amazon Machine image. These images are available for download at cloud.centos.org. The backing ostree repo is published to mirror.centos.org.

CentOS Atomic Host includes these core component versions:

  • atomic-1.17.2-9.git2760e30.el7.x86_64
  • cloud-init-0.7.5-10.el7.centos.1.x86_64
  • docker-1.12.6-32.git88a4867.el7.centos.x86_64
  • etcd-3.1.9-1.el7.x86_64
  • flannel-0.7.1-1.el7.x86_64
  • kernel-3.10.0-514.26.2.el7.x86_64
  • kubernetes-node-1.5.2-0.7.git269f928.el7.x86_64
  • ostree-2017.5-3.el7.x86_64
  • rpm-ostree-client-2017.5-1.atomic.el7.x86_64
Containerized kubernetes-master

The downstream release of CentOS Atomic Host ships without the kubernetes-master package built into the image. Instead, you can run the master kubernetes components (apiserver, scheduler, and controller-manager) in containers, managed via systemd, using the service files and instructions on the CentOS wiki. The containers referenced in these systemd service files are built in and hosted from the CentOS Community Container Pipeline, based on Dockerfiles from the CentOS-Dockerfiles repository.

These containers have been tested with the kubernetes ansible scripts provided in the upstream contrib repository, and they work as expected, provided you first copy the service files onto your master.

Alternatively, you can install the kubernetes-master components using rpm-ostree package layering using the command: atomic host install kubernetes-master.

Upgrading

If you're running a previous version of CentOS Atomic Host, you can upgrade to the current image by running the following command:

$ sudo atomic host upgrade

Images

Vagrant

CentOS-Atomic-Host-7-Vagrant-Libvirt.box and CentOS-Atomic-Host-7-Vagrant-Virtualbox.box are Vagrant boxes for Libvirt and Virtualbox providers.

The easiest way to consume these images is via the Atlas / Vagrant Cloud setup (see https://atlas.hashicorp.com/centos/boxes/atomic-host). For example, getting the VirtualBox instance up would involve running the following two commands on a machine with vagrant installed:

$ vagrant init centos/atomic-host && vagrant up --provider virtualbox

ISO

The installer ISO can be used via regular install methods (PXE, CD, USB image, etc.) and uses the Anaconda installer to deliver the CentOS Atomic Host. This image allows users to control the install using kickstarts and to define custom storage, networking and user accounts. This is the recommended option for getting CentOS Atomic Host onto bare metal machines, or for generating your own image sets for custom environments.

QCOW2

The CentOS-Atomic-Host-7-GenericCloud.qcow2 image is suitable for use in on-premise and local virtualized environments. We test this on OpenStack, AWS and local Libvirt installs. If your virtualization platform does not provide its own cloud-init metadata source, you can create your own NoCloud iso image.
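Creating such a seed image is just two text files on a volume labeled cidata, which the NoCloud datasource looks for; a minimal sketch (the instance-id, hostname, and SSH key are placeholders):

```shell
# NoCloud seed: cloud-init reads user-data and meta-data from a
# volume labeled "cidata".
cat > meta-data <<'EOF'
instance-id: atomic-host-001
local-hostname: atomic-host
EOF

cat > user-data <<'EOF'
#cloud-config
ssh_authorized_keys:
  - ssh-rsa AAAA... user@example.com
EOF

genisoimage -output seed.iso -volid cidata -joliet -rock user-data meta-data
```

Attach seed.iso as a CD-ROM to the qcow2 guest and cloud-init will pick up the configuration at first boot.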

Amazon Machine Images

Region          Image ID
us-east-1       ami-70e8fd66
ap-south-1      ami-c0c4bdaf
eu-west-2       ami-dba8bebf
eu-west-1       ami-42b6593b
ap-northeast-2  ami-7b5e8015
ap-northeast-1  ami-597a9e3f
sa-east-1       ami-95aedaf9
ca-central-1    ami-473e8123
ap-southeast-1  ami-93b425f0
ap-southeast-2  ami-e1332f82
eu-central-1    ami-e95ffd86
us-east-2       ami-1690b173
us-west-1       ami-189fb178
us-west-2       ami-a52a34dc

SHA Sums

f854d6ea3fd63b887d644b1a5642607450826bbb19a5e5863b673936790fb4a4 CentOS-Atomic-Host-7.1706-GenericCloud.qcow2
9e35d7933f5f36f9615dccdde1469fcbf75d00a77b327bdeee3dbcd9fe2dd7ac CentOS-Atomic-Host-7.1706-GenericCloud.qcow2.gz
836a27ff7f459089796ccd6cf02fcafd0d205935128acbb8f71fb87f4edb6f6e CentOS-Atomic-Host-7.1706-GenericCloud.qcow2.xz
e15dded673f21e094ecc13d498bf9d3f8cf8653282cd1c83e5d163ce47bc5c4f CentOS-Atomic-Host-7.1706-Installer.iso
5266a753fa12c957751b5abba68e6145711c73663905cdb30a81cd82bb906457 CentOS-Atomic-Host-7.1706-Vagrant-Libvirt.box
b85c51420de9099f8e1e93f033572f28efbd88edd9d0823c1b9bafa4216210fd CentOS-Atomic-Host-7.1706-Vagrant-VirtualBox.box

Release Cycle

The CentOS Atomic Host image follows the upstream Red Hat Enterprise Linux Atomic Host cadence. After sources are released, they're rebuilt and included in new images. After the images are tested by the SIG and deemed ready, we announce them.

Getting Involved

CentOS Atomic Host is produced by the CentOS Atomic SIG, based on upstream work from Project Atomic. If you'd like to work on testing images, help with packaging, documentation -- join us!

The SIG meets weekly on Thursdays at 16:00 UTC in the #centos-devel channel, and you'll often find us in #atomic and/or #centos-devel if you have questions. You can also join the atomic-devel mailing list if you'd like to discuss the direction of Project Atomic, its components, or have other questions.

Getting Help

If you run into any problems with the images or components, feel free to ask on the centos-devel mailing list.

Have questions about using Atomic? See the atomic mailing list or find us in the #atomic channel on Freenode.

Categories: IT

CentOS Seven blog: Updated CentOS Vagrant Images Available (v1706.01)

July 4, 2017 - 16:22

2017-07-05: We have released version 1706.02 of centos/7, providing RHBA-2017:1674-1, which fixes a regression introduced by the patch for the "stack clash" vulnerability. Existing boxes don't need to be destroyed and recreated: running sudo yum update inside the box will upgrade the kernel if needed. There is no patch for CentOS Linux 6 at this time, but we plan to provide an updated centos/6 image if such a patch is later released.

We are pleased to announce new official Vagrant images of CentOS Linux 6.9 and CentOS Linux 7.3.1611 for x86_64, with packages updated to 2 July 2017. This release also includes an updated kernel with important security fixes.

Known Issues
  1. The VirtualBox Guest Additions are not preinstalled; if you need them for shared folders, please install the vagrant-vbguest plugin and add the following line to your Vagrantfile: config.vm.synced_folder ".", "/vagrant", type: "virtualbox"

    We recommend using NFS instead of VirtualBox shared folders if possible; you can also use the vagrant-sshfs plugin, which, unlike NFS, works on all operating systems.

  2. Since the Guest Additions are missing, our images are preconfigured to use rsync for synced folders. Windows users can either use SMB for synced folders, or disable the sync directory by adding the line config.vm.synced_folder ".", "/vagrant", disabled: true

    to their Vagrantfile, to prevent errors on "vagrant up".

  3. Vagrant 1.8.5 is unable to create new CentOS Linux boxes due to Vagrant bug #7610
  4. Vagrant 1.8.7 is unable to download or update boxes due to Vagrant bug #7969.
  5. Vagrant 1.9.1 broke private networking, see Vagrant bug #8166
  6. Vagrant 1.9.3 doesn't work with SMB sync due to Vagrant bug #8404
  7. The vagrant-libvirt plugin is only compatible with Vagrant 1.5 to 1.8
  8. Installing open-vm-tools is not enough for enabling shared folders with Vagrant’s VMware provider. Please follow the detailed instructions in https://github.com/mvermaes/centos-vmware-tools (updated for this release).
Recommended Setup on the Host

Our automatic testing is running on a CentOS Linux 7 host, using Vagrant 1.9.4 with vagrant-libvirt and VirtualBox 5.1.20 (without the Guest Additions) as providers. We strongly recommend using the libvirt provider when stability is required.

We also performed additional manual testing with Vagrant 1.9.6 on OS X 10.11.6, with VirtualBox 5.1.22.

Downloads

The official images can be downloaded from Hashicorp’s Atlas. We provide images for libvirt-kvm, VirtualBox and VMware.

If you have never used our images before:

vagrant box add centos/6 # for CentOS Linux 6, or...
vagrant box add centos/7 # for CentOS Linux 7

Existing users can upgrade their images:

vagrant box update --box centos/6
vagrant box update --box centos/7

Verifying the integrity of the images

The SHA256 checksums of the images are signed with the CentOS 7 Official Signing Key. First, download and verify the checksum file:

$ curl http://cloud.centos.org/centos/7/vagrant/x86_64/images/sha256sum.txt.asc -o sha256sum.txt.asc
$ gpg --verify sha256sum.txt.asc

If the check passed, you can use the corresponding checksum when downloading the image with Vagrant:

$ vagrant box add --checksum-type sha256 --checksum aabcfe77a08b72bacbd6f05e5f26b67983b29314ee0039d0db4c9b28b4909fcd --provider libvirt --box-version 1705.01 centos/7

Unfortunately, vagrant box update doesn't accept a --checksum argument. Since there's no binary diffing involved in updating (the download size is the same, whether you have a previous version of the box or not), you can first issue vagrant box remove centos/7 and then download the box as described above.

Feedback

If you encounter any unexpected issues with the Vagrant images, feel free to ask on the centos-devel mailing list, or via IRC, in #centos on Freenode.

Acknowledgements

We would like to warmly thank Fabian Arrotin and Thomas Oulevey for their work on the build infrastructure, as well as Patrick Lang from Microsoft for testing and feedback on the Hyper-V images.

We would also like to thank the following people (listed alphabetically):

  • Graham Mainwaring, for helping with tests and validations
  • Michael Vermaes, for testing our official images, as well as for writing the detailed guide to using them with VMware Fusion Pro and VMware Workstation Pro.
Categories: IT

CentOS Seven blog: HPC Student Cluster Competition, Frankfurt, Germany

June 27, 2017 - 17:51

Last week, as I mentioned in my earlier post, I was in Frankfurt, Germany, for the ISC High Performance Computing conference. The thing that grabbed my attention, more than anything else, was the Student Cluster Competition.  11 teams from around the world - mostly from Universities - were competing to create the fastest (by a variety of measures) student supercomputer. These students have progressed from earlier regional competitions, and are the world's finest young HPC experts. Just being there was an amazing accomplishment. And these young people were obviously thrilled to be there.

Each team had hardware that had been sponsored by major HPC vendors. I talked with several of the teams about this. The UPC Thunderchip team from Barcelona Tech (winner of the Fan Favorite award!) said, for example, that their hardware had been donated by several vendors, among them CoolIT Systems, which provided the liquid cooling system that sat atop their rack.

(When I was in college, we had a retired 3B2 that someone had dumpster-dived for us, but I'm not bitter.)

Over the course of the week, these teams were given a variety of data challenges. Some of them, they knew ahead of time and had optimized for. Others were surprise challenges, which they had to optimize for on the fly.

While the jobs were running, the students roamed the show floor, talking with vendors, and, I'm sure, making contacts that will be beneficial in their future careers.

Now, granted, I had a bit of an ulterior motive. I was trying to find out the role that CentOS plays in this space. And, as I mentioned in my earlier post, 8 of the 11 teams were running CentOS. (One - University of Hamburg - was running Fedora. Two - NorthEast/Purdue and Barcelona Tech - were running Ubuntu.) The teams that placed first, second, and third in the competition (first place: Tsinghua University, Beijing; second place: Centre for High Performance Computing, South Africa; third place: Beihang University, Beijing) were also running CentOS. And many of the research organizations I talked to were running CentOS on their HPC clusters as well.

I ended up doing interviews with just two of the teams, about their hardware and the tests they had to complete to win the contest. You can see those on my YouTube channel, HERE and HERE.

At the end, while just three teams walked away with the trophies, all of these students had an amazing opportunity. I was so impressed with their professionalism, as well as their brilliance.

And good luck to the teams who have been invited to the upcoming competition in Denver. I hope I'll be able to observe that one, too!


CentOS Seven blog: New CentOS Atomic Host Release Available for Download

June 19, 2017 - 18:31

An updated version of CentOS Atomic Host (tree version 7.1705) is now available. CentOS Atomic Host is a lean operating system designed to run Docker containers, built from standard CentOS 7 RPMs, and tracking the component versions included in Red Hat Enterprise Linux Atomic Host.

CentOS Atomic Host is available as a VirtualBox or libvirt-formatted Vagrant box, or as an installable ISO, qcow2 or Amazon Machine image. These images are available for download at cloud.centos.org. The backing ostree repo is published to mirror.centos.org.

CentOS Atomic Host includes these core component versions:

  • atomic-1.17.2-4.git2760e30.el7.x86_64
  • cloud-init-0.7.5-10.el7.centos.1.x86_64
  • docker-1.12.6-28.git1398f24.el7.centos.x86_64
  • etcd-3.1.7-1.el7.x86_64
  • flannel-0.7.0-1.el7.x86_64
  • kernel-3.10.0-514.21.1.el7.x86_64
  • kubernetes-node-1.5.2-0.6.gitd33fd89.el7.x86_64
  • ostree-2017.5-1.el7.x86_64
  • rpm-ostree-client-2017.5-1.atomic.el7.x86_64
Containerized kubernetes-master

The downstream release of CentOS Atomic Host ships without the kubernetes-master package built into the image. Instead, you can run the master kubernetes components (apiserver, scheduler, and controller-manager) in containers, managed via systemd, using the service files and instructions on the CentOS wiki. The containers referenced in these systemd service files are built in and hosted from the CentOS Community Container Pipeline, based on Dockerfiles from the CentOS-Dockerfiles repository.

These containers have been tested with the kubernetes ansible scripts provided in the upstream contrib repository, and they work as expected, provided you first copy the service files onto your master.

Alternatively, you can install the kubernetes-master components via rpm-ostree package layering, using the command: atomic host install kubernetes-master.

Upgrading

If you're running a previous version of CentOS Atomic Host, you can upgrade to the current image by running the following command:

$ sudo atomic host upgrade

Images

Vagrant

CentOS-Atomic-Host-7-Vagrant-Libvirt.box and CentOS-Atomic-Host-7-Vagrant-Virtualbox.box are Vagrant boxes for Libvirt and Virtualbox providers.

The easiest way to consume these images is via the Atlas / Vagrant Cloud setup (see https://atlas.hashicorp.com/centos/boxes/atomic-host). For example, getting the VirtualBox instance up would involve running the following two commands on a machine with vagrant installed:

$ vagrant init centos/atomic-host && vagrant up --provider virtualbox

ISO

The installer ISO can be used via regular install methods (PXE, CD, USB image, etc.) and uses the Anaconda installer to deliver the CentOS Atomic Host. This image allows users to control the install using kickstarts and to define custom storage, networking and user accounts. This is the recommended option for getting CentOS Atomic Host onto bare metal machines, or for generating your own image sets for custom environments.

QCOW2

The CentOS-Atomic-Host-7-GenericCloud.qcow2 image is suitable for use in on-premise and local virtualized environments. We test this on OpenStack, AWS and local Libvirt installs. If your virtualization platform does not provide its own cloud-init metadata source, you can create your own NoCloud iso image.

Amazon Machine Images

Region          Image ID
us-east-1       ami-90e7c686
ap-south-1      ami-2ba1de44
eu-west-2       ami-2b44524f
eu-west-1       ami-a6c5dbc0
ap-northeast-2  ami-87ba65e9
ap-northeast-1  ami-64868e03
sa-east-1       ami-f9197295
ca-central-1    ami-66e55a02
ap-southeast-1  ami-850a88e6
ap-southeast-2  ami-caf7e6a9
eu-central-1    ami-42e84f2d
us-east-2       ami-89bb9dec
us-west-1       ami-245b7644
us-west-2       ami-68818a11

SHA Sums

46082267562f9eefbc4b29058696108f07cb0ceb2dafe601ec5d66bb1037120a  CentOS-Atomic-Host-7.1705-GenericCloud.qcow2
1bd6dfbe360be599a45734f03b34e08cc903630327e1c54534eb4218bf18d0da  CentOS-Atomic-Host-7.1705-GenericCloud.qcow2.gz
aa4a3ac057d3ea898bea07052aa9fd20c7ca7ea3c8a5474bca9227622915e5e2  CentOS-Atomic-Host-7.1705-GenericCloud.qcow2.xz
d99c2e9d1d31907ace3c1e54fc3087ebb9d627ca46168f7e65b469789cd39739  CentOS-Atomic-Host-7.1705-Installer.iso
cfd6a29e26e202476b6a2dc54b1b31321688270a6cc6a9ef4206ac0ebd0e309c  CentOS-Atomic-Host-7.1705-Vagrant-Libvirt.box
d9b5f965f637a909efa15cd43063729db002c012c81a8cde0de588ea0932f970  CentOS-Atomic-Host-7.1705-Vagrant-VirtualBox.box

Release Cycle

The CentOS Atomic Host image follows the upstream Red Hat Enterprise Linux Atomic Host cadence. After sources are released, they're rebuilt and included in new images. After the images are tested by the SIG and deemed ready, we announce them.

Getting Involved

CentOS Atomic Host is produced by the CentOS Atomic SIG, based on upstream work from Project Atomic. If you'd like to work on testing images, help with packaging, documentation -- join us!

The SIG meets weekly on Thursdays at 16:00 UTC in the #centos-devel channel, and you'll often find us in #atomic and/or #centos-devel if you have questions. You can also join the atomic-devel mailing list if you'd like to discuss the direction of Project Atomic, its components, or have other questions.

Getting Help

If you run into any problems with the images or components, feel free to ask on the centos-devel mailing list.

Have questions about using Atomic? See the atomic mailing list or find us in the #atomic channel on Freenode.

 


CentOS Seven blog: Introducing official CentOS images for Vagrant on Hyper-V

June 16, 2017 - 16:26

Starting with v1705, we are offering official CentOS images for Vagrant for the Hyper-V provider. Hyper-V is Microsoft's native hypervisor, available by default in Windows Server 2008 and newer, Windows 8 Pro, Windows 10 Pro, and Windows Hyper-V Server 2012; since it also powers Microsoft Azure, it's probably a good alternative to VirtualBox for people using Vagrant professionally on Windows hosts. We now support Hyper-V, libvirt (KVM), VirtualBox and VMware as Vagrant providers.

The build scripts are in the vagrant directory of our GitHub repo, https://github.com/CentOS/sig-cloud-instance-build. Pull requests are welcome; please only file a new issue to report actual bugs, not support requests (please use the mailing lists or IRC for that).

As with other proprietary hypervisors such as VMware, we have no way to automatically test the Hyper-V images on our Continuous Integration infrastructure. We welcome any feedback from Hyper-V users -- please let us know if you think you would be able to test our monthly releases on a regular basis (we normally release at the beginning of each month). You can contact us on the centos-devel mailing list or via IRC in #centos-devel on Freenode.

We have been working on providing images for Hyper-V since at least October 2016. It took much longer than I anticipated, due to the necessity of upgrading our build infrastructure and fixing several problems with the image configuration. In this context, I would like to warmly thank the following people:

  • Patrick Lang from Microsoft, for helping with testing and debugging the experimental Hyper-V images;
  • Fabian Arrotin and Thomas Oulevey from the CentOS Project, for their infrastructure work;
  • Michael Vermaes, for helping test the experimental images;
  • Karanbir Singh, the CentOS Project lead, for his constant support and assistance.

CentOS Seven blog: Updated CentOS Vagrant Images Available (v1705.01)

June 16, 2017 - 16:26

2017-06-21: We have released version 1705.02 of the official CentOS Linux images for Vagrant, providing important security fixes in glibc and the Linux kernel, addressing the "stack clash" vulnerability. We advise all users to update to this new release. Existing boxes don't need to be destroyed and recreated: running sudo yum update inside the box will upgrade the packages as needed (a reboot of the box is required after the update).

We are pleased to announce new official Vagrant images of CentOS Linux 6.9 and CentOS Linux 7.3.1611 for x86_64, featuring packages updated to 1 June 2017. Starting with this release, we also provide official images for Hyper-V, Microsoft's native hypervisor, available by default in Windows Server 2008 and newer, Windows 8 Pro, Windows 10 Pro, and Windows Hyper-V Server 2012.

Known Issues
  1. The VirtualBox Guest Additions are not preinstalled; if you need them for shared folders, please install the vagrant-vbguest plugin and add the following line to your Vagrantfile: config.vm.synced_folder ".", "/vagrant", type: "virtualbox"

    We recommend using NFS instead of VirtualBox shared folders if possible; you can also use the vagrant-sshfs plugin, which, unlike NFS, works on all operating systems.

  2. Since the Guest Additions are missing, our images are preconfigured to use rsync for synced folders. Windows users can either use SMB for synced folders, or disable the sync directory by adding the line config.vm.synced_folder ".", "/vagrant", disabled: true

    to their Vagrantfile, to prevent errors on "vagrant up".

  3. Vagrant 1.8.5 is unable to create new CentOS Linux boxes due to Vagrant bug #7610
  4. Vagrant 1.8.7 is unable to download or update boxes due to Vagrant bug #7969.
  5. Vagrant 1.9.1 broke private networking, see Vagrant bug #8166
  6. Vagrant 1.9.3 doesn't work with SMB sync due to Vagrant bug #8404
  7. The vagrant-libvirt plugin is only compatible with Vagrant 1.5 to 1.8
  8. Installing open-vm-tools is not enough for enabling shared folders with Vagrant’s VMware provider. Please follow the detailed instructions in https://github.com/mvermaes/centos-vmware-tools (updated for this release).
Recommended Setup on the Host

Our automatic testing is running on a CentOS Linux 7 host, using Vagrant 1.9.3 with vagrant-libvirt and VirtualBox 5.1.20 (without the Guest Additions) as providers. We strongly recommend using the libvirt provider when stability is required.

We also performed additional manual testing with Vagrant 1.9.4 on OS X 10.11.6, with VirtualBox 5.1.20.

Downloads

The official images can be downloaded from Hashicorp’s Atlas. We provide images for libvirt-kvm, VirtualBox and VMware.

If you have never used our images before:

vagrant box add centos/6 # for CentOS Linux 6, or...
vagrant box add centos/7 # for CentOS Linux 7

Existing users can upgrade their images:

vagrant box update --box centos/6
vagrant box update --box centos/7

Verifying the integrity of the images

The SHA256 checksums of the images are signed with the CentOS 7 Official Signing Key. First, download and verify the checksum file:

$ curl http://cloud.centos.org/centos/7/vagrant/x86_64/images/sha256sum.txt.asc -o sha256sum.txt.asc
$ gpg --verify sha256sum.txt.asc

If the check passed, you can use the corresponding checksum when downloading the image with Vagrant:

$ vagrant box add --checksum-type sha256 --checksum aabcfe77a08b72bacbd6f05e5f26b67983b29314ee0039d0db4c9b28b4909fcd --provider libvirt --box-version 1705.01 centos/7

Unfortunately, this is not possible with vagrant box update.
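Once `gpg --verify` succeeds, the remaining step is picking the right line out of `sha256sum.txt` for the image you downloaded. The file follows the standard `sha256sum` layout (`<hash>  <filename>` per line), so the lookup can be sketched like this — the file names in the sample are illustrative, while the checksum matches the one used above:

```python
def checksum_for(sha256sum_text, filename):
    """Return the SHA256 hash listed for `filename` in sha256sum-style text."""
    for line in sha256sum_text.splitlines():
        parts = line.split()
        if len(parts) == 2 and parts[1] == filename:
            return parts[0]
    raise KeyError(filename + " not listed")

# A sample sha256sum.txt body (illustrative file names).
sample = """\
aabcfe77a08b72bacbd6f05e5f26b67983b29314ee0039d0db4c9b28b4909fcd  CentOS-7-Vagrant-1705_01.Libvirt.box
0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef  CentOS-7-Vagrant-1705_01.VirtualBox.box
"""
print(checksum_for(sample, "CentOS-7-Vagrant-1705_01.Libvirt.box"))
```

The returned hash is what you would pass to `vagrant box add --checksum`.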

Feedback

If you encounter any unexpected issues with the Vagrant images, feel free to ask on the centos-devel mailing list, or via IRC, in #centos on Freenode.

Acknowledgements

We would like to warmly thank Fabian Arrotin and Thomas Oulevey for their work on the build infrastructure, as well as Patrick Lang from Microsoft for testing and feedback on the Hyper-V images.

We would also like to thank the following people (listed alphabetically):

  • Graham Mainwaring, for helping with tests and validations
  • Michael Vermaes, for testing our official images, as well as for writing the detailed guide to using them with VMware Fusion Pro and VMware Workstation Pro.

CentOS Seven blog: New CentOS Atomic Host with Optional Docker 1.13

May 16, 2017 - 19:55

An updated version of CentOS Atomic Host (tree version 7.20170428) is now available, featuring the option of substituting the host’s default docker 1.12 container engine with a more recent, docker 1.13-based version, provided via the docker-latest package.

CentOS Atomic Host is a lean operating system designed to run Docker containers, built from standard CentOS 7 RPMs, and tracking the component versions included in Red Hat Enterprise Linux Atomic Host.

CentOS Atomic Host is available as a VirtualBox or libvirt-formatted Vagrant box, or as an installable ISO, qcow2 or Amazon Machine image. These images are available for download at cloud.centos.org. The backing ostree repo is published to mirror.centos.org.

CentOS Atomic Host includes these core component versions:

  • atomic-1.15.4-2.el7.x86_64
  • cloud-init-0.7.5-10.el7.centos.1.x86_64
  • docker-1.12.6-16.el7.centos.x86_64
  • etcd-3.1.3-1.el7.x86_64
  • flannel-0.7.0-1.el7.x86_64
  • kernel-3.10.0-514.16.1.el7.x86_64
  • kubernetes-node-1.5.2-0.5.gita552679.el7.x86_64
  • ostree-2017.3-2.el7.x86_64
  • rpm-ostree-client-2017.3-1.atomic.el7.x86_64
Containerized kubernetes-master

The downstream release of CentOS Atomic Host ships without the kubernetes-master package built into the image. Instead, you can run the master kubernetes components (apiserver, scheduler, and controller-manager) in containers, managed via systemd, using the service files and instructions on the CentOS wiki. The containers referenced in these systemd service files are built in and hosted from the CentOS Community Container Pipeline, based on Dockerfiles from the CentOS-Dockerfiles repository.

These containers have been tested with the kubernetes ansible scripts provided in the upstream contrib repository, and they work as expected, provided you first copy the service files onto your master.

Alternatively, you can install the kubernetes-master components via rpm-ostree package layering, using the command: atomic host install kubernetes-master.

docker-latest

You can switch to the alternate docker version by running:

# systemctl disable docker --now
# systemctl enable docker-latest --now
# sed -i '/DOCKERBINARY/s/^#//g' /etc/sysconfig/docker

Because both docker services share the /run/docker directory, you cannot run both docker and docker-latest at the same time on the same system.

Upgrading

If you're running a previous version of CentOS Atomic Host, you can upgrade to the current image by running the following command:

$ sudo atomic host upgrade

Images

Vagrant

CentOS-Atomic-Host-7-Vagrant-Libvirt.box and CentOS-Atomic-Host-7-Vagrant-Virtualbox.box are Vagrant boxes for Libvirt and Virtualbox providers.

The easiest way to consume these images is via the Atlas / Vagrant Cloud setup (see https://atlas.hashicorp.com/centos/boxes/atomic-host). For example, getting the VirtualBox instance up would involve running the following two commands on a machine with vagrant installed:

$ vagrant init centos/atomic-host && vagrant up --provider virtualbox

ISO

The installer ISO can be used via regular install methods (PXE, CD, USB image, etc.) and uses the Anaconda installer to deliver the CentOS Atomic Host. This image allows users to control the install using kickstarts and to define custom storage, networking and user accounts. This is the recommended option for getting CentOS Atomic Host onto bare metal machines, or for generating your own image sets for custom environments.

QCOW2

The CentOS-Atomic-Host-7-GenericCloud.qcow2 image is suitable for use in on-premise and local virtualized environments. We test this on OpenStack, AWS and local Libvirt installs. If your virtualization platform does not provide its own cloud-init metadata source, you can create your own NoCloud iso image.

Amazon Machine Images

Region          Image ID
ap-south-1      ami-9c7b06f3
eu-west-2       ami-14425570
eu-west-1       ami-a1b9b7c7
ap-northeast-2  ami-e01cc18e
ap-northeast-1  ami-2a0d304d
sa-east-1       ami-ce7619a2
ca-central-1    ami-8b813def
ap-southeast-1  ami-61e36702
ap-southeast-2  ami-84c7cde7
eu-central-1    ami-f970ae96
us-east-1       ami-4a70015c
us-east-2       ami-d2cfe8b7
us-west-1       ami-57ba9c37
us-west-2       ami-fbd8bd9b

SHA Sums

977c9b6e70dd0170fc092520f01be26c4d256ffe5340928d79c762850e5cedd9  CentOS-Atomic-Host-7.1704-GenericCloud.qcow2
781074c43aa6a6f3cad61a77108541976776eb3cb6fe30f54ca746a8314b5f87  CentOS-Atomic-Host-7.1704-GenericCloud.qcow2.gz
aef7fedf01b920ee75449467eb93724405cb22d861311fbc42406a7bd4dbfee2  CentOS-Atomic-Host-7.1704-GenericCloud.qcow2.xz
669c5fd1b97bc2849a7e3dbec325207d98e834ce71e17e0921b583820d91f4f5  CentOS-Atomic-Host-7.1704-Installer.iso
b5ef69bff65ab595992649f62c8fc67c61faa59ba7f4ff0cb455a9196e450ae2  CentOS-Atomic-Host-7.1704-Vagrant-Libvirt.box
73757f50ef9cdac2e3ba6d88a216cca23000a32fa96891902feaa86d49147e3f  CentOS-Atomic-Host-7.1704-Vagrant-VirtualBox.box

Release Cycle

The CentOS Atomic Host image follows the upstream Red Hat Enterprise Linux Atomic Host cadence. After sources are released, they're rebuilt and included in new images. After the images are tested by the SIG and deemed ready, we announce them.

Getting Involved

CentOS Atomic Host is produced by the CentOS Atomic SIG, based on upstream work from Project Atomic. If you'd like to work on testing images, help with packaging, documentation -- join us!

The SIG meets weekly on Thursdays at 16:00 UTC in the #centos-devel channel, and you'll often find us in #atomic and/or #centos-devel if you have questions. You can also join the atomic-devel mailing list if you'd like to discuss the direction of Project Atomic, its components, or have other questions.

Getting Help

If you run into any problems with the images or components, feel free to ask on the centos-devel mailing list.

Have questions about using Atomic? See the atomic mailing list or find us in the #atomic channel on Freenode.


Fabian Arrotin: Linking Foreman with Zabbix through MQTT

May 16, 2017 - 00:00

It's been a while since I first thought about this design, but I finally had time to implement it the proper way -- and "just in time", as I recently needed to migrate our Foreman instance to another host (from CentOS 6 to CentOS 7).

Within the CentOS infra, we use Foreman as an ENC for our Puppet environments (multiple ones). For full automation between configuration management and monitoring, you need some "glue". The idea is that whatever you describe at the configuration management level should be authoritative, and so should automatically configure the monitoring solution you have in place in your infra.

In our case, that means that we have Foreman/puppet on one side, and Zabbix on the other side. Let's see how we can "link" the two sides.

The usual approach I've seen so far is to use exported resources on each node, store them in another PuppetDB, and then reapply all those resources on the monitoring node. The problem with such a solution is that it's "expensive" and, when one thinks about it, a little strange to export the "knowledge" from Foreman into another DB and then let Puppet compile a huge catalog on the monitoring side, even if nothing has changed.

One issue is also that in our Zabbix setup we have some nodes that aren't really managed by Foreman/puppet (but by other automation, around Ansible), so I had to use an intermediate step that other tools can also use/abuse for the same purpose.

The other reason is that I admit I'm a fan of "event driven" configuration change, so my idea was:

  • update a host in Foreman (or groups of hosts, etc)
  • publish that change on a secure network through a message queue (asynchronously, so that it doesn't slow down the Foreman update operation itself)
  • let the Zabbix server know about that change and apply it (like linking a template to a host)

So the good news is that it can be done really easily with several components:

Here is a small overview of the process:

Foreman hooks

Setting up Foreman hooks is really easy: just install the package itself (tfm-rubygem-foreman_hooks.noarch), read the Documentation, and then create your scripts. There are some examples for Bash and Python in the examples directory, but basically you just need to place your scripts in a specific place. In my case I wanted to "trigger" an event on a node update (like adding a puppet class, or a variable/parameter change), so I just had to place it under /usr/share/foreman/config/hooks/host/managed/update/.

One little remark though: if you add a new hook file, don't forget to restart Foreman itself, so that it picks up the new file; otherwise it will still be ignored and so not run.

Mosquitto

Mosquitto itself is available in your favorite rpm repo, so installing it is a breeze. The reason I selected Mosquitto is that it's very lightweight (the package is under 200kB) and it supports TLS and ACLs out of the box.

For an introduction to MQTT/Mosquitto, I'd suggest reading Jan-Piet Mens' dedicated blog post on the topic. I admit that I discovered it by attending one of his talks, back in the Loadays.org days :-)

Zabbix-cli

While one can always talk to the raw Zabbix API, I found it useful to use a tool I was already using for various tasks around Zabbix: zabbix-cli. For people interested in using it on CentOS 6 or 7, I built the packages and they are on CBS.

So I plumbed it into a small service, driven by a systemd unit file, that subscribes to a specific MQTT topic, parses the needed information (like the hostname and the Zabbix templates to link, unlink, etc.) and then updates that in Zabbix itself (from the log output):

[+] 20170516-11:43 : Adding zabbix template "Template CentOS - https SSL Cert Check External" to host "dev-registry.lon1.centos.org" [Done]: Templates Template CentOS - https SSL Cert Check External ({"templateid":"10105"}) linked to these hosts: dev-registry.lon1.centos.org ({"hostid":"10174"})
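The parsing step of that glue service can be sketched as follows; the JSON payload shape (a `host` key plus `templates_add`/`templates_remove` lists) is a hypothetical example for illustration, not the exact format used in the CentOS infra:

```python
import json

def parse_event(payload):
    """Extract the hostname and template changes from an MQTT message body.

    The payload shape here is a hypothetical example, not the real format.
    """
    event = json.loads(payload)
    return (event["host"],
            event.get("templates_add", []),
            event.get("templates_remove", []))

# Sample message, reusing the host from the log output above.
msg = json.dumps({
    "host": "dev-registry.lon1.centos.org",
    "templates_add": ["Template CentOS - https SSL Cert Check External"],
})
host, to_link, to_unlink = parse_event(msg)
print(host, to_link, to_unlink)
```

In the real setup, each resulting tuple would then be handed to zabbix-cli to link or unlink the templates on that host.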

Cool, so now I don't have to worry about forgetting to tie a Zabbix template to a host, as it's now done automatically. Needless to say, the deployment of those tools was of course automated too, coming from Puppet/Foreman :-)


Fabian Arrotin: Deploying Openstack through puppet on CentOS 7 - a Journey

May 8, 2017 - 00:00

It's not a secret that I was playing/experimenting with OpenStack over the last few days. When I mention OpenStack, I should even say RDO, as it's RPM packaged, built and tested on the CentOS infra.

Now that it's time to deploy it in production, you should have a deeper look at how to proceed and which tool to use. Sure, Packstack can help you set up a quick PoC, but after some discussions with people hanging around in the #rdo irc channel on freenode, it seems that almost everybody agreed that it's not the kind of tool you want to use for a proper deployment.

So let's have a look at the available options. While I really like/prefer Ansible, we (the CentOS Project) still use Puppet as our configuration management tool, itself using Foreman as the ENC. Let's look at both options.

  • Ansible: lots of native modules exist to manage an existing/already deployed OpenStack cloud, but nothing that really helps setting one up from scratch. OTOH it's true that OpenStack Ansible exists, but it sets up the OpenStack components in LXC containers, and I wasn't really comfortable with the whole idea (YMMV)
  • Puppet: lots of Puppet modules exist, so you can reuse/import them into your existing Puppet setup; this seems to be the preferred method when discussing with people in #rdo (when not using TripleO, though)

So, after some analysis, and despite the fact that I really prefer Ansible over Puppet, I decided (so that it could still make sense in our infra) to go the "puppet modules" way. That was the beginning of a journey, with a lot of yaks to shave along the way.

It started with me trying to "just" reuse and adapt some existing modules I found. Wrong. And it's even funny, because it goes against one of my mantras: "Don't try to automate what you can't understand from scratch" (and I fully agree with Matthias' thoughts on this).

So one could just read all the OpenStack Puppet modules and then try to understand how to assemble them to build a cloud. But I remembered that Packstack itself is Puppet driven, so I decided to have a look at what it was generating and start from that to write my own module from scratch. How to proceed? Easy: on a VM, just install Packstack, generate the answer file, "salt" it to your needs, and generate the manifests:

yum install -y centos-release-openstack-ocata && yum install -y openstack-packstack
packstack --gen-answer-file=answers.txt
vim answers.txt
packstack --answer-file=answers.txt --dry-run
 * The installation log file is available at: /var/tmp/packstack/20170508-101433-49cCcj/openstack-setup.log
 * The generated manifests are available at: /var/tmp/packstack/20170508-101433-49cCcj/manifests

So now we can have a look at all the generated manifests and start our own from scratch, reimporting all the needed OpenStack Puppet modules. That's what I did... but I started to encounter some issues. The first one was that the Puppet version we were using was 3.6.2 (everywhere, on every release/arch we support: CentOS 6 and 7, on x86_64, i386, aarch64, ppc64 and ppc64le).

One of the OpenStack components is RabbitMQ, but the OpenStack modules rely on the puppetlabs module to deploy/manage it. You'll see a lot of those external modules being called/needed by the OpenStack Puppet modules. The first thing I had to do was investigate our own modules, as some have the same names but don't come from puppetlabs/forge; instead of analyzing all of those, I moved everything RDO-related to a different environment so that it wouldn't conflict with some of our existing modules. Back now to the RabbitMQ one: Puppet errored when just trying to use it. First yak to shave: updating the whole CentOS infra Puppet to a higher version, because of a Puppet bug. So let's rebuild Puppet for CentOS 6/7 with a higher version on CBS.

That meant, of course, testing our own modules first, on our test Foreman/puppetmasterd instance, and as the upgrade worked, I applied it everywhere. Good, so let's jump to the next yak.

After the RabbitMQ issue was solved, I encountered other ones, now coming from the OpenStack Puppet modules themselves: the .rb code used for types/providers expected Ruby 2 and not 1.8.3, which was the version available on our puppetmasterd (yeah, our Foreman was on a CentOS 6 node). So another yak to shave: migrating our Foreman instance from CentOS 6 to a new CentOS 7 node. Basically, installing a CentOS 7 node with the same Foreman version as on the CentOS 6 node and then following the migration procedure -- but then, again, time lost testing the update/upgrade and all the other modules, etc. (One can see why I prefer agentless cfgmgmt.)

Finally, I found that some of the OpenStack Puppet modules don't touch the whole config. Let me explain why. In OpenStack Ocata some things are mandatory, like the Placement API, but despite all the classes being applied, I had issues getting it to run correctly when deploying an instance. It's true that I initially had a bug in my Puppet code for the user/password used to configure the RabbitMQ settings, but that was solved and applied correctly in /etc/nova/nova.conf (the "transport_url=" setting). Yet the OpenStack Nova services (all the nova-*.log files, btw) kept saying that the given credentials were refused by RabbitMQ, even though they worked when tested manually.

After having checked the RabbitMQ logs, I saw that despite what was configured in nova.conf, the services were still trying to use the wrong user/pass to connect to RabbitMQ. Strange, as ::nova::cell_v2::simple_setup was included and was also supposed to use the transport_url declared at the nova.conf level (and so configured by ::nova). That's how I discovered that something "ugly" happens: even if you modify nova.conf, some settings are stored in the MySQL DB, and you can see those (the "wrong" ones in my case) with:

nova-manage cell_v2 list_cells --debug

Something to keep in mind for an initial deployment: if your RabbitMQ user/pass needs to be changed, Puppet will not complain, but it will only update the conf file, not the settings first imported by Puppet into the DB (table nova_api.cell_mapping, if you're interested). After that, everything was running, and I reinstalled/reprovisioned my test nodes multiple times, applying the Puppet modules/manifests from puppetmasterd to confirm.

That was quite a journey -- probably only the beginning, but it's a good start. Now to investigate other options for Cinder/Glance, as it seems Gluster was deprecated and I'd like to know why.

Hope this helps if you need to bootstrap OpenStack with Puppet!


CentOS Seven blog: Updated CentOS Vagrant Images Available (v1704.01)

May 3, 2017 - 15:32

We are pleased to announce new official Vagrant images of CentOS Linux 6.9 and CentOS Linux 7.3.1611 for x86_64, featuring packages updated to 30 April 2017 and the following changes:

  • kdump has been removed from the images, since it needs to reserve 160MB + 2 bits per 4kB of RAM for the crash kernel, and automatic allocation only works on systems with at least 2GB RAM
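As a quick sanity check of that formula (a worked example, not code from the image build), the reservation on a 2GB guest works out to about 160.1MB, a significant slice of a small VM's memory:

```python
# Crash-kernel reservation per the formula above: a fixed 160MB base plus
# 2 bits of metadata for every 4kB page of RAM.
def crashkernel_mb(ram_bytes):
    base = 160 * 1024 * 1024          # fixed 160MB reservation
    pages = ram_bytes // 4096         # number of 4kB pages
    metadata = pages * 2 // 8         # 2 bits per page, converted to bytes
    return (base + metadata) / (1024 * 1024)

# On a 2GB guest: 160MB base + 0.125MB of metadata.
print(crashkernel_mb(2 * 1024**3))  # → 160.125
```

Below 2GB of total RAM, reserving ~160MB leaves too little for the guest itself, which is why automatic allocation refuses to kick in there.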
Known Issues
  1. The VirtualBox Guest Additions are not preinstalled; if you need them for shared folders, please install the vagrant-vbguest plugin and add the following line to your Vagrantfile: config.vm.synced_folder ".", "/vagrant", type: "virtualbox"

    Please note that there is a bug in VirtualBox 5.1.20 that prevents vagrant-vbguest from working.
    We recommend using NFS instead of VirtualBox shared folders if possible. You can also use the vagrant-sshfs plugin, which, unlike NFS, works on all operating systems.

  2. Since the Guest Additions are missing, our images are preconfigured to use rsync for synced folders. Windows users can either use SMB for synced folders, or disable the sync directory by adding the line

     config.vm.synced_folder ".", "/vagrant", disabled: true

     to their Vagrantfile, to prevent errors on "vagrant up".
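
     Put together, a minimal Vagrantfile for such a Windows host could look like the sketch below (the box name comes from this announcement; the directory name is purely illustrative):

```shell
# Minimal sketch of a Vagrantfile for a host without the Guest Additions.
mkdir -p demo
cat > demo/Vagrantfile <<'EOF'
Vagrant.configure("2") do |config|
  config.vm.box = "centos/7"
  # Either switch the default share to SMB ...
  # config.vm.synced_folder ".", "/vagrant", type: "smb"
  # ... or disable the sync directory entirely to avoid rsync errors:
  config.vm.synced_folder ".", "/vagrant", disabled: true
end
EOF
```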

  3. Vagrant 1.8.5 is unable to create new CentOS Linux boxes due to Vagrant bug #7610.
  4. Vagrant 1.8.7 is unable to download or update boxes due to Vagrant bug #7969.
  5. Vagrant 1.9.1 broke private networking, see Vagrant bug #8166.
  6. Vagrant 1.9.3 doesn't work with SMB sync due to Vagrant bug #8404.
  7. The vagrant-libvirt plugin is only compatible with Vagrant 1.5 to 1.8.
  8. Installing open-vm-tools is not enough for enabling shared folders with Vagrant’s VMware provider. Please follow the detailed instructions in https://github.com/mvermaes/centos-vmware-tools (updated for this release).
Recommended Setup on the Host

Our automatic testing is running on a CentOS Linux 7 host, using Vagrant 1.9.3 with vagrant-libvirt and VirtualBox 5.1.20 (without the Guest Additions) as providers. We strongly recommend using the libvirt provider when stability is required.

We also performed additional manual testing with Vagrant 1.9.4 on OS X 10.11.6, with VirtualBox 5.1.20.

Downloads

The official images can be downloaded from Hashicorp’s Atlas. We provide images for libvirt-kvm, VirtualBox and VMware.

If you never used our images before:

vagrant box add centos/6   # for CentOS Linux 6, or...
vagrant box add centos/7   # for CentOS Linux 7

Existing users can upgrade their images:

vagrant box update --box centos/6
vagrant box update --box centos/7

Verifying the integrity of the images

The SHA256 checksums of the images are signed with the CentOS 7 Official Signing Key. First, download and verify the checksum file:

$ curl http://cloud.centos.org/centos/7/vagrant/x86_64/images/sha256sum.txt.asc -o sha256sum.txt.asc
$ gpg --verify sha256sum.txt.asc

If the check passed, you can use the corresponding checksum when downloading the image with Vagrant:

$ vagrant box add --checksum-type sha256 --checksum f8cd95ce24fd9f615dd38bbf8b6c285a916a4cac1d98ada4ab16d6626468032b --provider libvirt --box-version 1704.01 centos/7

Unfortunately, this is not possible with vagrant box update.
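
You can still verify an already-downloaded box by hand against the signed checksum file. The sketch below runs offline with a stand-in file; in reality the box comes from Atlas and the checksums from the sha256sum.txt.asc file on cloud.centos.org:

```shell
# Offline illustration of a manual checksum verification.
echo "fake box contents" > centos-7.box     # stand-in for the downloaded box
sha256sum centos-7.box > sha256sum.txt      # stand-in for the signed checksum file
sha256sum -c sha256sum.txt                  # prints "centos-7.box: OK"
```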

Feedback

If you encounter any unexpected issues with the Vagrant images, feel free to ask on the centos-devel mailing list, or via IRC, in #centos on Freenode.

Acknowledgements

We would like to thank Fabian Arrotin and Thomas Oulevey for their work on the build infrastructure.

We would also like to thank the following people (listed alphabetically):

  • Graham Mainwaring, for helping with tests and validations
  • Michael Vermaes, for testing our official images, as well as for writing the detailed guide to using them with VMware Fusion Pro and VMware Workstation Pro.
Categories: Informatics
