Planet CentOS - http://planet.centos.org/

Fabian Arrotin: Enabling SPF record for centos.org

January 17, 2017 - 00:00

In the last few weeks, I noticed that spam activity was back, including against the centos.org infra. One of the most-used techniques was email spoofing (aka a "forged From address"). That's how I discovered that we had never implemented SPF for centos.org (while some of the Infra team members had it on their personal SMTP servers).

While SPF itself is "just" a TXT DNS record in your zone, you have to think twice before implementing it. And publishing such a policy yourself doesn't mean that your SMTP servers are checking SPF either. There are pros and cons to SPF, so first read multiple sources/articles to understand how it will impact your server/domain when sending/receiving:

Sending

The first thing to consider is how people with an alias can send their mail: either from behind your known MX borders (and included in your SPF record), or through alternate SMTP servers relaying (after being authorized, of course) through servers listed in your SPF record.
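For illustration, such a policy is just one TXT record in the zone; a hypothetical example (not the actual centos.org record):

; allow the domain's MX hosts plus one dedicated relay to send mail
; for domain.com, and ask receivers to hard-fail everything else
domain.com.   IN   TXT   "v=spf1 mx a:relay.domain.com -all"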

One thing to know about SPF is that it breaks plain forwarding and aliases. It's not about how you set up your SPF record, but about how the originator domain does it: for example, if joe@domain.com sends to joe@otherdomain.com, which is itself an alias for joe2@domain.com, delivery will break, as the MX for domain.com will see that a mail from domain.com was 'sent' by otherdomain.com and not by an IP listed in its SPF record. There are workarounds for this, though, namely remailing and SRS.

Receiving

So you have SPF in place and restrict where your mail can be sent from? Great, but SPF only works if the other SMTP servers involved check for it, so you should do the same! The fun part is that even if you have CentOS 7, and so Postfix 2.10, nothing lets you verify SPF by default, as stated on this page:

Note: Postfix already ships with SPF support, in the form of a plug-in policy daemon. This is the preferred integration model, at least until SPF is mandated by standards.

So for our Postfix setup, we decided to use pypolicyd-spf: lightweight, easy, written in Python. The needed packages are already available in EPEL, but we also rebuilt it on CBS. Once installed, configured and integrated with Postfix, you'll start (based on your .conf settings) blocking mail that arrives at your SMTP servers from IPs/servers not listed in the originator domain's SPF policy (if any).
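For reference, the usual policy-daemon integration looks roughly like this; a sketch only, since the daemon's binary path and name depend on the package you actually installed:

# /etc/postfix/master.cf -- declare the policy service (binary path is an assumption)
policyd-spf  unix  -  n  n  -  0  spawn
    user=nobody argv=/usr/libexec/postfix/policyd-spf

# /etc/postfix/main.cf -- consult it during recipient restrictions
policyd-spf_time_limit = 3600
smtpd_recipient_restrictions =
    permit_mynetworks,
    permit_sasl_authenticated,
    reject_unauth_destination,
    check_policy_service unix:private/policyd-spf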

If you have issues with our current SPF policy on centos.org, feel free to reach us in #centos-devel on irc.freenode.net to discuss it.


Karanbir Singh: create a new github.com repo from the cli

January 14, 2017 - 00:00

I often get into a state where I've started some work, done some commits etc. and then realised I don't have a place to push the code to. Getting it on GitHub has involved getting the browser out, logging in to GitHub, click click click {pain}. So, here is how you can create a new repo for your login name on github.com without moving away from the shell.


curl -H "X-GitHub-OTP: XXXXX" -u 'LoginName' https://api.github.com/user/repos -d '{"name":"Repo-To-Create"}'

You need to supply your OTP pass and replace XXXXX with it, and of course your own LoginName and finally the Repo-To-Create. Once this call runs, curl will ask for your password, and the GitHub API should dump a bunch of details (or tell you that it failed, in which case you need to check the call).
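As a side note, the same endpoint accepts a few more JSON fields if, say, you want the repo private or with a description (field names from the GitHub v3 API; the values are obviously examples):

# same call as above, with extra payload fields
curl -H "X-GitHub-OTP: XXXXX" -u 'LoginName' https://api.github.com/user/repos -d '{"name":"Repo-To-Create","description":"created from the cli","private":true}'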

Now the usual 'git remote add github git@github.com:LoginName/Repo-To-Create' and you are off.

regards,


Fabian Arrotin: Music recording on CentOS 7 DAW

January 5, 2017 - 00:00

There was something on my (private) TODO list for quite some time: being able to record music, mix it, and output a single song from multiple recorded tracks. For that you need a Digital Audio Workstation (DAW).

I have several instruments at home (electric guitars, bass, digital piano and also drums) but due to lack of (free) time I had never investigated the DAW part on Linux, and especially on CentOS. So having some "offline" days during the holidays helped me investigate it and set up a small DAW on a recycled machine. Let's consider the hardware and software parts.

Hardware support

I personally still own a Line6 TonePort UX2 interface, which is now more than 10 years old, and which I used in the past on an iMac. The iMac still runs, but exclusively with CentOS 7 these days, and the TonePort was just collecting dust. When I tried to plug it in, it wasn't really detected, mainly because of the kernel config, so I (gently) asked Toracat to enable the required kernel module in the centos-plus kernel. With the centos-plus kernel, the TonePort UX2 is seen as an external sound card. Good:

geonosis kernel: usb 3-2: new full-speed USB device number 2 using xhci_hcd
geonosis kernel: usb 3-2: New USB device found, idVendor=0e41, idProduct=4142
geonosis kernel: usb 3-2: New USB device strings: Mfr=1, Product=2, SerialNumber=0
geonosis kernel: usb 3-2: Product: TonePort UX2
geonosis kernel: usb 3-2: Manufacturer: Line 6
geonosis mtp-probe: checking bus 3, device 2: "/sys/devices/pci0000:00/0000:00:14.0/usb3/3-2"
geonosis mtp-probe: bus: 3, device: 2 was not an MTP device
geonosis kernel: line6usb: module is from the staging directory, the quality is unknown, you have been warned.
geonosis kernel: line6usb 3-2:1.0: Line6 TonePort UX2 found
geonosis kernel: line6usb: module is from the staging directory, the quality is unknown, you have been warned.
geonosis kernel: line6usb 3-2:1.0: Line6 TonePort UX2 now attached
geonosis kernel: line6usb 3-2:1.1: Line6 TonePort UX2 found
geonosis kernel: usbcore: registered new interface driver line6usb
geonosis kernel: usbcore: registered new interface driver snd_usb_toneport

I also recently offered myself a small gift to play with: a small Fender Mustang guitar amplifier. It's small enough to fit under the desk in my home office, has amp/effect emulation built in, plus a USB output to send sound directly to the computer. (Easier for quick recording than setting up a microphone in front of my other Fender Custom Vibrolux Reverb tube amp, and my neighbors are also grateful for that decision.)

The good news is that it's directly recognized as another sound card, without any kernel module to activate/enable:

geonosis kernel: usb 3-1: new full-speed USB device number 3 using xhci_hcd
geonosis kernel: usb 3-1: New USB device found, idVendor=1ed8, idProduct=0014
geonosis kernel: usb 3-1: New USB device strings: Mfr=1, Product=2, SerialNumber=3
geonosis kernel: usb 3-1: Product: Mustang Amplifier
geonosis kernel: usb 3-1: Manufacturer: FMIC
geonosis kernel: usb 3-1: SerialNumber: 05D7FF373837594743075518
geonosis kernel: hid-generic 0003:1ED8:0014.0001: hiddev0,hidraw0: USB HID v1.10 Device [FMIC Mustang Amplifier] on usb-0000:00:14.0-1/input0
geonosis mtp-probe: checking bus 3, device 3: "/sys/devices/pci0000:00/0000:00:14.0/usb3/3-1"
geonosis mtp-probe: bus: 3, device: 3 was not an MTP device
geonosis kernel: usbcore: registered new interface driver snd-usb-audio

With those two additional sound cards detected, it looks now like this :

[arrfab@geonosis ~]$ cat /proc/asound/cards
 0 [PCH            ]: HDA-Intel - HDA Intel PCH
                      HDA Intel PCH at 0xd2530000 irq 32
 1 [TonePortUX2    ]: line6usb - TonePort UX2
                      Line6 TonePort UX2 at USB 3-2:1.0
 2 [Amplifier      ]: USB-Audio - Mustang Amplifier
                      FMIC Mustang Amplifier at usb-0000:00:14.0-1, full speed

Great, now let's have a look at the software part !

Software

There are multiple ways to quickly record sound from a sound card on Linux, and Audacity is well known for this: it comes with several effects, and you can quickly import, edit, cut and paste sounds (and more!), even across multiple tracks. But when it comes to music recording, especially if you also want to play with MIDI, you need a proper sequencer. It's really great to see that on Linux you have multiple alternatives, but one that seems to be very popular in the Free and open source world is Ardour. As nothing was built for CentOS 7, I decided to create a DAW-7 COPR repository that has everything I need (when combined with EPEL and/or Nux-Dextop).

So I (re)built (thanks to the upstream Fedora maintainers!) multiple packages in that COPR repository, including (but not limited to):

  • Ardour 5.5: sequencer
  • QjackCtl: frontend for the needed jack-audio-connection-kit
  • Calf: very good effects/plugins for JACK that can therefore be used directly within Ardour
  • LV2: other effects/plugins
  • Guitarix: guitar/bass amp + effects simulator
  • LMMS: another sequencer, more oriented toward MIDI/loops than audio recording from external devices
  • Hydrogen: drum machine for when you can't record real drums but can program your own pattern[s]
  • ... and much more ... :-)

After having tested multiple settings (there is a lot to learn around this), I found myself comfortable with this:

sudo su -c 'curl https://copr.fedorainfracloud.org/coprs/arrfab/DAW-7/repo/epel-7/arrfab-DAW-7-epel-7.repo > /etc/yum.repos.d/arrfab-daw.repo'
sudo yum install -y ardour5 calf lmms hydrogen qjackctl jack-audio-connection-kit jack_capture guitarix lv2-abGate lv2-calf-plugins lv2-drumgizmo lv2-drumkv1 lv2-fomp-plugins lv2-guitarix-plugins lv2-invada-plugins lv2-vocoder-plugins lv2-x42-plugins fluid-soundfont-gm fluid-soundfont-gs

One thing you have to know (but read all the tutorials/documentation around this) is that your user needs to be part of the jackuser and audio groups to be able to use the needed JACK sound server. JACK itself is something you have to master too, but once you understand it, it's just a virtual view of what you'd do with real cables, plugging in and out of various pieces of hardware:

sudo usermod --groups jackuser,audio --append $your_username
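After logging out and back in, a quick check should list both groups (a trivial sketch, with $your_username as above):

id -nG $your_username | tr ' ' '\n' | grep -E '^(jackuser|audio)$'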

One website I recommend reading is LibreMusicProduction, as it has tons of howtos and also video tutorials about Ardour and other settings. Something else worth mentioning if you just want drum loops: you can find some via Google, but I found some really good ones to start with (licensed in a way that lets you reuse them) on Looperman, Drumslive and Freesound.

Who said that CentOS 7 was only for servers in datacenters and the Cloud? :-)

Have fun on your CentOS 7 DAW.


CentOS Seven blog: Updated CentOS Vagrant Images Available (v1611.01)

December 16, 2016 - 16:17

We are pleased to announce new official Vagrant images of CentOS Linux 6.8 and CentOS Linux 7.3.1611 for x86_64, featuring packages updated to 15 December 2016, as well as the following user-visible changes:

  • the size of the boot partition has been increased to 1GB in centos/7, to conform with the new upstream recommendations
  • the centos/7 image is now based on CentOS Linux 7.3.1611
Known Issues
  1. The VirtualBox Guest Additions are not preinstalled; if you need them for shared folders, please install the vagrant-vbguest plugin and add the following line to your Vagrantfile: config.vm.synced_folder ".", "/vagrant", type: "virtualbox"

    We recommend using NFS instead of VirtualBox shared folders if possible.

  2. Since the Guest Additions are missing, our images are preconfigured to use rsync for synced folders. Windows users can either use SMB for synced folders, or disable the sync directory by adding the line config.vm.synced_folder ".", "/vagrant", disabled: true

    to their Vagrantfile.

  3. Please use Vagrant 1.8.6 or 1.9.0 (version 1.8.5 is unable to create new Linux boxes due to Vagrant bug #7610, while version 1.8.7 is unable to download or update boxes due to Vagrant bug #7969).
  4. Installing open-vm-tools is not enough for enabling shared folders with Vagrant’s VMware provider. Please follow the detailed instructions in https://github.com/mvermaes/centos-vmware-tools.
Downloads

The official images can be downloaded from Hashicorp’s Atlas. We provide images for libvirt-kvm, VirtualBox and VMware.

If you have never used our images before:

$ vagrant box add centos/6 # for CentOS Linux 6
$ vagrant box add centos/7 # for CentOS Linux 7

Existing users can upgrade their images:

$ vagrant box update --box centos/6
$ vagrant box update --box centos/7

If you are using CentOS Linux on the host, we recommend installing Vagrant from SCL and using the libvirt images. In general, the Vagrant packages provided by your Linux distribution are preferable, since they usually backport fixes for some upstream bugs. If you are using Vagrant on other operating systems, please use Vagrant 1.8.6 or 1.9.0 (see Known issues, item 3).

Verifying the integrity of the images

The SHA256 checksums of the images are signed with the CentOS 7 Official Signing Key. First, download and verify the checksum file:

$ curl http://cloud.centos.org/centos/7/vagrant/x86_64/images/sha256sum.txt.asc -o sha256sum.txt.asc
$ gpg --verify sha256sum.txt.asc
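If gpg complains that the public key is not found, import the CentOS 7 Official Signing Key first; a sketch, using the key ID published on https://www.centos.org/keys/ (verify the fingerprint there before trusting it):

$ gpg --keyserver hkp://pool.sks-keyservers.net --recv-keys 24C6A8A7F4A80EB5
$ gpg --verify sha256sum.txt.asc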

If the check passed, you can use the corresponding checksum when downloading the image with Vagrant:

$ vagrant box add --checksum-type sha256 --checksum 74a95be409cef813881f5312dc1221e2559cdbf25f45d5234d81e91632f99cce --provider libvirt --box-version 1611.01 centos/7

Unfortunately, this is not possible with vagrant box update.

Feedback

If you encounter any unexpected issues with the Vagrant images, feel free to ask on the centos-devel mailing list, or via IRC, in #centos on Freenode.

Acknowledgements

We would like to thank the following people (listed alphabetically):

  • Graham Mainwaring, for helping with tests and validations
  • Michael Vermaes, for testing our official images, as well as for writing the detailed guide to using them with VMware Fusion Pro and VMware Workstation Pro.

CentOS Seven blog: Updated CentOS Vagrant Images Available (v1610.01)

November 15, 2016 - 18:23

Official Vagrant images for CentOS Linux 6.8 and CentOS Linux 7.2.1511 for x86_64 are now available for download, featuring packages updated to 30 October 2016, as well as the following user-visible changes:

  • several optimisations to make the images smaller and faster:
    • do not install most firmware packages
    • do not install microcode_ctl
    • do not build a rescue initramfs (resulting in significantly faster kernel updates)
    • do not load the floppy module on centos/7 (this reduces boot time by ca. 5s)
  • [security]: do not allow regular users to use su to become root or vagrant – see issue #76
  • set the SELinux type of /etc/sudoers.d/vagrant to etc_t
Known Issues
  1. The centos/7 image is based on CentOS Linux 7.2.1511, since CentOS Linux 7.3 is not available yet.
  2. The VirtualBox Guest Additions are not preinstalled; if you need them for shared folders, please install the vagrant-vbguest plugin and add the following line to your Vagrantfile: config.vm.synced_folder ".", "/vagrant", type: "virtualbox"

    We recommend using NFS instead of VirtualBox shared folders if possible.

  3. Since the Guest Additions are missing, our images are preconfigured to use rsync for synced folders. Windows users can either use SMB for synced folders, or disable the sync directory by adding the line config.vm.synced_folder ".", "/vagrant", disabled: true

    to your Vagrantfile.

  4. Please use Vagrant 1.8.6 (version 1.8.5 is unable to create new Linux boxes due to Vagrant bug #7610, while version 1.8.7 is unable to download or update boxes due to Vagrant bug #7969).
  5. Installing open-vm-tools is not enough for enabling shared folders with Vagrant’s VMware provider. Please follow the detailed instructions in https://github.com/mvermaes/centos-vmware-tools.
Downloads

The official images can be downloaded from Hashicorp’s Atlas. We provide images for libvirt-kvm, VirtualBox and VMware.

If you have never used our images before:

$ vagrant box add centos/6 # for CentOS Linux 6
$ vagrant box add centos/7 # for CentOS Linux 7

Existing users can upgrade their images:

$ vagrant box update --box centos/6
$ vagrant box update --box centos/7

If you are using CentOS Linux on the host, we recommend installing Vagrant from SCL and using the libvirt images. In general, the Vagrant packages provided by your Linux distribution are preferable, since they usually backport fixes for some upstream bugs. If you are using Vagrant on other operating systems, please use Vagrant 1.8.6 (see Known issues, item 4).

Verifying the integrity of the images

The SHA256 checksums of the images are signed with the CentOS 7 Official Signing Key. First, download and verify the checksum file:

$ curl http://cloud.centos.org/centos/7/vagrant/x86_64/images/sha256sum.txt.asc -o sha256sum.txt.asc
$ gpg --verify sha256sum.txt.asc

If the check passed, you can use the corresponding checksum when downloading the image with Vagrant:

$ vagrant box add --checksum-type sha256 --checksum ce12f84646efab28b007bdf16f3134686a23fa052f809c4600919561274051da --provider libvirt --box-version 1610.01 centos/7

Unfortunately, this is not possible with vagrant box update.

Feedback

If you encounter any unexpected issues with the Vagrant images, feel free to ask on the centos-devel mailing list, or via IRC, in #centos on Freenode.

Acknowledgements

Some of the optimisations in this release were inspired by the Vagrant images from Fedora Cloud and Debian Cloud.

We would also like to thank the following people (in alphabetical order):

  • Graham Mainwaring, for helping with tests and validations
  • Michael Vermaes, for testing our official images, as well as for writing the detailed guide to using them with VMware Fusion Pro and VMware Workstation Pro.

CentOS Seven blog: Introducing CentOS Container Image Scanners

November 10, 2016 - 19:52

Over the past few months, we've been working on the CentOS Community Container Pipeline, which aims to help developers focus on what they love doing most – writing awesome code – while giving sysadmins insight into the image by providing metadata about it! The project code has been hosted on GitHub.com since its inception. The hosted service that runs off this code is available to the community at large, and delivers content to registry.centos.org.
What is CentOS Community Container Pipeline?

The CentOS Community Container Pipeline enables developers and sysadmins to have container images built, tested and scanned on the CentOS Project's infrastructure right after a developer pushes code to the git repository!

Container Pipeline Flow

Once the developer pushes code to the git repo, the Container Pipeline fetches the changes and container images are built using OpenShift, which provides an enterprise distribution of the Kubernetes project. Once an image is built, it gets scanned using atomic scanners (more on this soon!). The results of these scanners are combined into a mail and sent to the author of the container image. Container images can also be tested using user-provided test scripts, to ensure that containers can be spun up from the image on platforms like CentOS Linux, CentOS Atomic Host and OpenShift.

Why scan images?

Building container images and spinning up containers is rather simple. Having more information, a.k.a. metadata, about container images before running them in one's production environment is of paramount value! Of course, the kind of information is what makes it of paramount or negligible value. That's what we aim to provide with the CentOS Community Container Pipeline.

Scanners in CentOS Community Container Pipeline

At this point we have two scanners operational: one that checks your CentOS Linux based container images for package updates, and another that verifies the files installed from RPM packages. Both scanners are based on the atomic tool developed by the Project Atomic folks. We are working on rolling out more scanners in the near future!

Atomic Scanner

The scanners based on atomic are run automatically by the Pipeline after successful completion of the image build process. These scanners can be run stand-alone as well! That is, you can install a scanner on your CentOS Linux based systems and run it against a container image built on a CentOS Linux base image. And it does this without bringing up or executing the container itself.

In the pipeline, upon completion of the scan process, the user is notified about issues with the image that need to be addressed. Addressing these issues instills more confidence in deploying the resulting container image in a production environment.

Besides scanning an image after it is built, in the near future scanners will also run periodically and provide the developer with actionable information.

yum update scanner

This scanner provides the user with information about RPM packages that need to be updated in the container image. If you're a developer, this information helps ensure you're running the latest packages, with bug and security fixes, to avoid surprises in production.

Example output:

$ atomic scan --scanner pipeline-scanner --rootfs /mnt registry.centos.org/centos/centos
...
Files associated with this scan are in /var/lib/atomic/pipeline-scanner/2016-11-10-10-30-46-609885.

The scanner ran successfully and has stored the scan data under the /var directory. Let's see the output:

$ cat /var/lib/atomic/pipeline-scanner/2016-11-10-10-30-46-609885/_mnt/image_scan_results.json
{
  "Scanner": "pipeline-scanner",
  "Successful": "true",
  "Start Time": "2016-11-10-10-42-46-265018",
  "Scan Results": {
    "Package Updates": [
      "bind-license.noarch",
      "kmod.x86_64",
      "kmod-libs.x86_64",
      "kpartx.x86_64",
      "openssl-libs.x86_64",
      "python.x86_64",
      "python-libs.x86_64",
      "systemd.x86_64",
      "systemd-libs.x86_64",
      "tzdata.noarch"
    ],
    "OS Release": "CentOS Linux 7 (Core)"
  },
  "Scan Type": "Image Scan",
  "CVE Feed Last Updated": "NA",
  "Finished Time": "2016-11-10-10-42-52-184442",
  "UUID": "mnt"
}

The Package Updates key in the above output lists packages that need to be updated in the scanned container image.

RPM verify scanner

As its name suggests, the RPM verify scanner verifies all files (libraries and binaries) installed via RPM packages in a given container image. It reports any modified or tampered libraries and binaries. This is useful to ensure that a given container image is not shipped with tainted libraries or binaries.

Example output:

$ atomic scan --scanner rpm-verify docker.io/centos/postgresql
{
  "Scanner": "scanner-rpm-verify",
  "Successful": "true",
  "Start Time": "2016-11-10-19-49-06-740445",
  "Scan Results": {
    "rpmVa_issues": [
      {
        "config": false,
        "issue": "missing",
        "rpm": {
          "VENDOR": "CentOS",
          "PACKAGER": "CentOS BuildSystem ",
          "BUILDHOST": "worker1.bsys.centos.org",
          "RPM": "glibc-2.17-55.el7_0.1.x86_64",
          "SIGNATURE": "RSA/SHA256, Sat Aug 30 02:20:20 2014, Key ID 24c6a8a7f4a80eb5"
        },
        "filename": "/sbin/sln"
      },
      {
        "config": false,
        "issue": "........P",
        "rpm": {
          "VENDOR": "CentOS",
          "PACKAGER": "CentOS BuildSystem ",
          "BUILDHOST": "worker1.bsys.centos.org",
          "RPM": "iputils-20121221-6.el7.x86_64",
          "SIGNATURE": "RSA/SHA256, Fri Jul 4 07:38:44 2014, Key ID 24c6a8a7f4a80eb5"
        },
        "filename": "/usr/sbin/clockdiff"
      }
    ]
  },
  "Scan Type": "RPM Verify scan for finding tampered files.",
  "CVE Feed Last Updated": "NA",
  "Finished Time": "2016-11-10-19-49-10-933952",
  "UUID": "da4ffaac638fada8723c6721721d99b0dfaba67d79c8507e881ee8327e17ecb"
}

Adding your container to the pipeline

It’s simple! Add an entry for your opensource project under index.d directory on CentOS Container Index. You can see a few files representing projects or individual developers under this directory already. Also, you need to have a cccp.yml file in your project that has information useful for the Container Pipeline to use. You can refer respective GitHub repos to get more information. Or get in touch with us on #centos-devel IRC channel on FreeNode network.

Dharmit Shah and Navid Shaikh


Karanbir Singh: Welcoming new members to the CentOS Container team

November 7, 2016 - 15:08

Join me in warmly welcoming Dharmit Shah, Bama Charan Kundu and Navid Shaikh to the CentOS Container team.

They are primarily focused on delivering and curating the CentOS Container Pipeline (https://github.com/centos/container-pipeline-service). In the coming weeks, keep an eye out for announcements from them on the CentOS Blog at https://seven.centos.org.

– KB


Karanbir Singh: Vim 8 for CentOS Linux 7

November 5, 2016 - 04:40

Matěj Cepl is curating a set of Vim 8 rpms for EL7 over at
https://copr.fedorainfracloud.org/coprs/mcepl/vim8/ – consider them testing grade, and I am sure he would appreciate feedback and issue reports.

Now go get the shiny new Vim 8.
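Enabling the repo follows the usual COPR pattern; a sketch, assuming the standard COPR repo-file URL layout for EL7:

curl https://copr.fedorainfracloud.org/coprs/mcepl/vim8/repo/epel-7/mcepl-vim8-epel-7.repo -o /etc/yum.repos.d/mcepl-vim8.repo
yum update vim-enhanced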

$ rpm -q vim-enhanced
vim-enhanced-8.0.0054-1.0.8.el7.centos.x86_64

Enjoy! And don't forget to drop by and say thanks to Matěj over at https://matej.ceplovi.cz/blog/


Karanbir Singh: Security contact for the CentOS Project

October 27, 2016 - 21:52

If you find any security issue in a CentOS.org website or service, please let us know; the same goes for any issues in CentOS Linux as well as the SIG content on centos.org. The best way to get in touch is to email security@centos.org – and if the content is sensitive, please encrypt it with the corresponding gpg key. E.g. for a CentOS Linux 7 specific issue, please encrypt the content with the CentOS Linux 7 key; similarly, for any content specific to the Virt SIG, please use the CentOS SIG Virt key.

How can you verify the keys? The fingerprints are published over https at https://www.centos.org/keys/.
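A possible reporting flow, as a sketch (the key file name is hypothetical; fetch the key via the page above and compare fingerprints before using it):

gpg --import centos-linux-7-key.asc           # hypothetical file name
gpg --fingerprint                             # compare against https://www.centos.org/keys/
gpg --armor --encrypt -r <uid-of-the-imported-key> report.txt
# then mail the resulting report.txt.asc to security@centos.org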

DNS data for the www.centos.org website is:
www.centos.org has address 85.12.30.226
www.centos.org has IPv6 address 2a01:788:a002:0:225:90ff:fe33:f34c

– KB


Karanbir Singh: Adding a timeout for your CI jobs at ci.centos.org

October 26, 2016 - 23:10

The typical workflow for most ci.centos.org (cico) jobs is:

* Call Duffy's API endpoint with node/get and grab some machines
* Setup the machines environment for the ci job to come
* Push content to nodes
* Run the tests
* Clear out / tear down
* Call Duffy's API end point with node/done to return the machines
* Report status via Jenkins

Machines handed out in this manner to CI jobs are available for up to 6 hours at a time, at which point they are reaped back into the available pool for other jobs to consume. What this also means is that if the job gets stuck for any reason, it could be up to six hours before the developer/user gets any feedback about the tests failing.

The usual way to resolve this situation is to set up a timeout in the Jenkins job. That allows Jenkins to watch the run and, on timeout, kill the job and report failure. However, if your job is set up with a single build step that also includes requesting the machines and returning them when done, Jenkins killing the job means your machines won't get returned for up to 6 hrs. Given that most projects are set up with a quota of 10 deployed machines, not returning them when done means your jobs get put into a queue that isn't clearing out in a rush.

One way to work around this would be to split the machine request and machine return functions into a pre-build and post-build step, and then pass the session-id for the deployed machines between the build steps. That way, you could trap and report specific conditions. A variation of this would be to set up conditional build steps, and have them execute different functions as needed.

An easy and simple workaround, however, is to just wrap the test commands in a /usr/bin/timeout call. timeout is delivered as a binary from the coreutils package on CentOS Linux 7 and is available on all machines, including the Jenkins worker instances. Take a look at https://github.com/almighty/almighty-jobs/blob/master/devtools-ci-index.yaml#L64 for a quick example of how this would work in a JJB template. This way we can time out the job, and yet still be able to return nodes or handle any other content we need, in the same CI job script – a script that then does not have or need any Jenkins specific content, making it possible to run it from developer laptops or as a child job on its own.
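A minimal sketch of such a single build step (the duration, script name and helper are placeholders; the node-return part stands for whatever Duffy node/done call your job already makes):

# cap the test payload at 90 minutes, keeping its exit code
/usr/bin/timeout --preserve-status 90m ./run-tests.sh
rc=$?
# the nodes still get returned, even when the tests were killed on timeout
return_duffy_nodes "$SSID"   # hypothetical helper wrapping Duffy's node/done API call
exit $rc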

/usr/bin/timeout (man 1 timeout) also allows you to preserve the sub-command's exit status, if you need to track and report different statuses from your CI jobs. And of course, there are many other uses for /usr/bin/timeout as well!

– KB


Fabian Arrotin: (ab)using Alias for Zabbix

October 21, 2016 - 00:00

It's not a secret that we use Zabbix to monitor the CentOS.org infra. That's even a reason why we (re)build it for some other architectures, including aarch64, ppc64 and ppc64le on CBS, and also armhfp.

There are really cool things in Zabbix, including Low-Level Discovery. With such discovery, you can create items/prototypes/triggers that will be applied "automagically" for each discovered network interface or mounted filesystem. For example, the default template (if you still use it) has such item prototypes, and also a graph for each discovered network interface showing you the bandwidth usage on those interfaces.

But what happens if you suddenly want, for example, to create some calculated item on top of those? Well, the issue is that from one node to the other the interface name can be eth0, or sometimes eth1, and with CentOS 7 things started to move to the new naming scheme, so you can have something like enp4s0f0. I wanted to create a template that would fit them all, so I had a look at calculated items and thought "well, easy: let's have that calculated item use a user macro that defines the name of the interface we really want to gather stats from ..." ... but it seems I was wrong. Zabbix user macros can be used in multiple places, but not everywhere. (It seems that I wasn't the only one not understanding the doc coverage for this, but at least that bug report will have an effect on the doc to clarify this.)

It was when I discussed this in #zabbix (on irc.freenode.net) that RichLV pointed me to something that could be interesting for my case: Alias. I must admit that it was the first time I had heard about it, and I don't even know when it landed in Zabbix (or if I just overlooked it at first sight).

So cool. Now I can just have our config mgmt push, for example, a /etc/zabbix/zabbix_agentd.d/interface-alias.conf file that looks like this, and reload zabbix-agent:

Alias=net.if.default.out:net.if.out[enp4s0f0]
Alias=net.if.default.in:net.if.in[enp4s0f0]

That means that now, whatever the interface name is (as puppet, in our case, creates that file for us), we'll be able to get values from the net.if.default.out and net.if.default.in keys automatically. Cool.
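And calculated items can now safely reference those keys; for example, a formula for total traffic on the default interface (standard Zabbix calculated-item syntax; the item key holding it is up to you):

last("net.if.default.in") + last("net.if.default.out")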

That also means that if you want to aggregate all this into a single key for a group of nodes (and so graph that too), you can always reference those new keys (example for the total outgoing bandwidth for a group of hosts):

grpsum["Your group name","net.if.default.out",last,0]

And from that point, you can easily configure triggers, and graphs too. Now back to work on some other calculated items for total bandwidth usage over a period of time, and triggers based on some max_bw_usage user macro.


Johnny Hughes: CentOS-7 1609 Rolling ISOs Now Live

October 1, 2016 - 09:01
Rolling ISOs

The CentOS Linux team produces rolling CentOS-7 ISOs, normally on a monthly basis.

The most recently completed version of those ISOs is version 1609 (16 is for 2016, 09 is for September).

The team usually creates all our ISO and cloud images based on all updates through the 28th of the month in question, so 1609 means these ISOs contain all updates for CentOS-7 through September 28th, 2016.

These rolling ISOs have the same installer as the most recent CentOS-7 point release (currently 7.2.1511) so that they install on the same hardware as our original ISOs, while the packages installed are the latest updates.

This means that the actual kernel that boots up on the ISO is the 7.2.1511 default kernel (kernel-3.10.0-327.el7.x86_64.rpm), but the kernel installed is the latest kernel package (kernel-3.10.0-327.36.1.el7.x86_64.rpm for the 1609 ISOs).

These normal Rolling ISOs can be downloaded from this LINK and here are the sha256sums:
CentOS-7-x86_64-DVD-1609-01.iso:
3948f7a31a8693b2df90dc31252551dcd5aa9d13f4910ad9ce28fcddc247d84f 

CentOS-7-x86_64-Everything-1609-01.iso:
602383c2aa93f6d7df46bd47658dcbf9b9d567108dec56ba60ce46a2f51c6eb2 

CentOS-7-x86_64-LiveGNOME-1609-01.iso:
f6ee8af6814bc58e2c8424db862a443649f3a57b5f85caf63704ab52d5bbac68 

CentOS-7-x86_64-LiveKDE-1609-01.iso:
1349c70e815d46c49d6ea459de6fbc074f5131c803343db18d32987ee78fd303 

CentOS-7-x86_64-Minimal-1609-01.iso:
54721e5e444a3191b18b0fabe1c35e12b65f93aa31732beb7899212d19cab69b 

You can verify the sha256sum of your downloaded ISO following these instructions prior to install.
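As a quick sketch, for the Minimal ISO it boils down to comparing against the value listed above:

$ sha256sum CentOS-7-x86_64-Minimal-1609-01.iso
54721e5e444a3191b18b0fabe1c35e12b65f93aa31732beb7899212d19cab69b  CentOS-7-x86_64-Minimal-1609-01.iso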

The DVD ISO contains everything needed to do an install, but still fits on one 4.3 GB DVD. This is the most versatile install that fits on a single DVD, and if you are new to CentOS this is likely the installer you want. If you pick Minimal Install in this installer, you can do an install that is identical to the Minimal ISO. You can also do many different Workstation and Server installs from this ISO, including both GNOME and KDE.

The Everything ISO has all packages, even those not used by the installer. You usually do not need this ISO unless you have no internet access and want to install things later from this DVD that are not included by the graphical installer. Most users will not need this ISO; it is > 7 GB, but installs can be done from a USB key that is big enough to hold it (currently an 8 GB key).

The LiveGNOME ISO is a basic GNOME Workstation install, but no modification or personalization is allowed during the install. It is a much easier install to do, but any extra packages must be installed from the internet later.

The LiveKDE ISO is a basic KDE Workstation install. It also does not allow modification or personalization until after the install has finished.

The Minimal ISO is a very small and quick install that boots to the command console and has network connectivity and a firewall. It is used by system administrators for the minimal install that they can then add functionality to. You need to know what you are doing to use this ISO.

Newer Hardware Support

As explained above, the normal rolling ISOs boot from the point release installer. Sometimes there is newer hardware that might not be supported by the point release installer, but could be supported with a newer kernel. This installer is much less tested and is only recommended if you cannot get one of the normal installers to work for you.

There are only 2 ISOs in this family; here are the links and sha256sums:
CentOS-7-x86_64-DVD-1609-99.iso:
90c7148ddccbb278d45d06805dee6599ec1acc585cafd02d56c6b8e32a238fa9 

CentOS-7-x86_64-Minimal-1609-99.iso:
1cfbbc73cc7a0eb17d7fe2fa5b1adf07492e340540603e8e1fd28b52e95f02e3

You can verify the ISO's sha256 sum using this LINK; the descriptions above are the same for these two ISOs.



Fabian Arrotin: CentOS Infra public service dashboard

September 22, 2016 - 00:00

As soon as you run some IT services, there is one thing you already know: you'll have downtimes, despite all your efforts to avoid them...

As the old joke goes: "What's up?" asked the Boss. "Hopefully everything!" answered the SysAdmin...

You probably know that the CentOS infra is itself widespread, and subject to quick moves too. Recently we had to announce an important DC relocation that impacts some of our crucial and publicly facing services. That one falls in the "scheduled and known outages" category, and can be prepared for. For such downtime we always announce it through several mediums, like sending a mail to the centos-announce and centos-devel (and in this case, also the ci-users) mailing lists. But even when we announce it in advance, some people forget about it, or people using (sometimes "indirectly") the concerned service are surprised and then ask about it (usually in #centos or #centos-devel on irc.freenode.net).

In parallel to those "scheduled outages", we also have the worst kind: the unscheduled ones. For those, depending on the impact/criticality of the affected service, and also the estimated RTO, we may or may not send a mail to the concerned mailing lists.

So we decided to publish a very simple public dashboard for the CentOS infra, covering only the publicly facing services, to give a quick overview of that part of the infra. It's now live and hosted at https://status.centos.org.

We use Zabbix to monitor our infra (we build it for multiple arches, like x86_64, i386, ppc64, ppc64le, aarch64 and also armhfp), including through remote Zabbix proxies (because of our "distributed" network setup right now, with machines all around the world). For some of the services listed on status.centos.org, we can "manually" announce a downtime/maintenance period, but Zabbix also updates that dashboard on its own. The simple way to link those together was to use Zabbix custom alertscripts; you can even customize those to send specific macros, and have the alertscript parse them and then update the dashboard.
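For the mechanics: a custom alertscript is just an executable dropped under the server's AlertScriptsPath and attached to a "script" media type, and Zabbix invokes it with the recipient, subject and message as arguments. A sketch (the script name/path is hypothetical; the real one parses our macros and pushes the status change):

#!/bin/bash
# /usr/lib/zabbix/alertscripts/statuspage.sh (hypothetical name)
# Zabbix passes: $1 = send-to, $2 = subject, $3 = message (carrying custom macros)
to="$1"; subject="$2"; message="$3"
# the real script would parse ${message} and update https://status.centos.org
logger -t statuspage "alert for ${to}: ${subject} / ${message}"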

We hope to enhance that dashboard in the future, but it's a good start, and I have to thank again Patrick Uiterwijk who wrote that tool for Fedora initially (and that we adapted to our needs).


Johnny Hughes: CentOS at cPanel 2016

August 15, 2016 - 12:23
The CentOS team will have a booth at the cPanel 2016 WEIRED Conference in Portland, Oregon, at the Hilton Portland & Executive Tower, October 3rd through 5th, 2016.

I (Johnny Hughes) will be there to discuss all things CentOS and we may have some guests at the booth from some of our Special Interest Groups and others from the CentOS Community.

If you are planning to be at the conference, please stop by and see us.

Johnny Hughes: CentOS at 2016 Texas Linux Fest

June 21, 2016 - 20:25
We will have a CentOS Booth at the 2016 Texas Linux Fest on July 8th and 9th in the Austin Texas Convention Center.

Please stop by the CentOS booth for some Swag and discussion.

We will also have several operational CentOS-7 Arm32 devices at the booth, including a Raspberry Pi2, Raspberry Pi3, CubieTruck (Cubieboard3) and CubieTruck Plus (Cubieboard5). These devices showcase our AltArch Special Interest Group, which produces ppc64, ppc64le, armhfp (Arm32), aarch64 (Arm64), and i686 (x86 32) architectures of CentOS-7.

We will also be glad to discuss the new things happening within the project, including a number of operational Special Interest Groups (SIGs) that are producing add-on software for CentOS, including the Xen Hypervisor, OpenStack (via RDO), Storage (GlusterFS and Ceph), Software Collections, Cloud Images (AWS, Azure, Oracle, Vagrant Boxes, KVM), and Containers (Docker and Project Atomic).

So, if you have been using CentOS for the past 12 years, all of that is still happening just like it always has (a long-lived standard Linux distro with LTS), alongside all the new hypervisor, container and cloud capabilities.

Fabian Arrotin: Generating multiple certificates with Letsencrypt from a single instance

May 3, 2016 - 00:00

Recently I was discussing TLS everywhere with some people, and we then started to discuss the Letsencrypt initiative. I had to admit that I had only tested it some time ago (just for "fun"), but I suddenly looked at it from a different angle: while the most common use case is installing/running the letsencrypt client on your node to directly configure it, I have to admit that's something I didn't want to have to deal with. I still think that proper web server configuration has to happen through cfgmgmt, and not through another process (and the same goes for the key/cert distribution, something for a different blog post maybe).

So if you're automatically (pushing|pulling) your web server configuration from $cfgmgmt, but you want to use/deploy TLS certificates signed by Letsencrypt, what can you do? Well, the good news is that you aren't forced to let the letsencrypt client touch your configuration at all: you can use the "certonly" option to just generate the private key locally, send the CSR, and get the signed cert back (and the whole chain too). One thing to know about Letsencrypt is that the validation/verification process isn't the one you see at most companies providing CA/signing capabilities: as there is no ID/paper verification (or anything else), the only validation for the domain/sub-domain you want a certificate for happens over an http request (basically creating a file with a challenge, processing a request from their "ACME" server[s] to retrieve that file back, and validating the content).

So what are our options then? The letsencrypt documentation mentions several plugins, like manual (requires you to create the file with the challenge answer on the webserver yourself, then launch the validation process), standalone (doesn't work if you already have an httpd/nginx process, as there will be a port conflict), or webroot (works fine, as it just writes the file itself under /.well-known/ under the DocumentRoot).

The webroot plugin seems easy but, as said, we don't even want to install letsencrypt on the web server[s]. Even worse, suppose (and that's the case I had in mind) that you have multiple web nodes configured in a kind of CDN way: with the "manual" plugin, you'd have to distribute the validation file to all the nodes (as you don't know in advance which one will be verified by the ACME server).

So what about something centralized (where you'd run the letsencrypt client locally) for all your certs (including some with SANs), in a transparent way? I thought about something like this:

The idea would be to:

  • use a central node: let's call it central.domain.com (VM, docker container, make-your-choice-here) to launch the letsencrypt client
  • have the ACME server transparently hit one of the web servers without any changed/uploaded file
  • have the web server receiving the GET request for that file use the letsencrypt central node as a backend
  • the ACME server is happy, and signed certificates become available automatically on the centralized letsencrypt node.

The good news is that it's possible, and even really easy to implement, through ProxyPass (for the httpd/Apache web server) or proxy_pass (for an nginx based setup).

For example, for the httpd vhost config for sub1.domain.com (three nodes in our example), we can just add this to the .conf file:

<Location "/.well-known/">
    ProxyPass "http://central.domain.com/.well-known/"
</Location>

So now, once that is in place everywhere, you can generate the cert for that domain on the central letsencrypt node (assuming that httpd is running on that node, reachable from the "frontend" nodes, and that /var/www/html is indeed the DocumentRoot (default) for httpd on that node):

letsencrypt certonly --webroot --webroot-path /var/www/html --manual-public-ip-logging-ok --agree-tos --email you@domain.com -d sub1.domain.com

Same if you run nginx instead (let's assume this for sub2.domain.com and sub3.domain.com): you just have to add a snippet in your vhost .conf file (and before the / location definition too):

location /.well-known/ {
    proxy_pass http://central.domain.com/.well-known/ ;
}

And then on the central node, do the same thing, but you can add multiple -d options for multiple SubjectAltNames in the same cert:

letsencrypt certonly --webroot --webroot-path /var/www/html --manual-public-ip-logging-ok --agree-tos --email you@domain.com -d sub2.domain.com -d sub3.domain.com

Transparent, smart, easy to do, and even something you can deploy just when you need to renew, and then remove to go back to the initial config files (if you don't want those ProxyPass directives active all the time).
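Renewing then boils down to re-running the same certonly call from the central node; a sketch (--keep-until-expiring is the client flag that makes this a no-op while the cert is still fresh – check that your client version has it):

letsencrypt certonly --webroot --webroot-path /var/www/html --keep-until-expiring -d sub1.domain.com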

The only other thing you have to know is that once you have proper TLS in place, it's usually better to transparently redirect all requests for your http server to the https version. Most people will do that (next example for httpd/apache) like this:

RewriteEngine On
RewriteCond %{HTTPS} !=on
RewriteRule ^/?(.*) https://%{SERVER_NAME}/$1 [R,L]

That's good, but when you renew the certificate, you'll probably want to be sure that the GET request for /.well-known/* continues to work over http (from the ACME server), so we can tune those rules a little bit (RewriteConds are cumulative, so it will not redirect if the URL starts with .well-known):

RewriteEngine On
RewriteCond $1 !^.well-known
RewriteCond %{HTTPS} !=on
RewriteRule ^/?(.*) https://%{SERVER_NAME}/$1 [R,L]

Different syntax, but same principle for nginx (also a snippet, not the full configuration file for that server/vhost):

location /.well-known/ {
    proxy_pass http://central.domain.com/.well-known/ ;
}
location / {
    rewrite ^ https://$server_name$request_uri? permanent;
}

Hope you'll find this useful, especially if you don't want to deploy letsencrypt everywhere but still want to use it to generate your keys/certs locally. Once done, you can then distribute/push/pull those files (depending on your cfgmgmt), and don't forget to also implement proper monitoring for cert validity, and automation around that too (consider that your homework).


Fabian Arrotin: IPv6 connectivity status within the CentOS.org infra

April 29, 2016 - 00:00

Recently, some people started to ask for proper IPv6/AAAA records for some of our public mirror infrastructure, like mirror.centos.org and msync.centos.org.

The reason is that a lot of people are now using IPv6 wherever possible, and from a CentOS point of view we should ensure that everybody can get content over (legacy) IPv4 and IPv6. Funny that I call IPv4 "legacy", as we still have to admit it's the default everywhere, even in 2016 with the available pools now exhausted.

While we already had AAAA records for some of our public nodes (like www.centos.org, for example), I started to "chase" proper native IPv6 connectivity for our nodes. That's where I had to contact all our valuable sponsors. The first thing to say is that we'd like to thank them all for their support of the CentOS Project over the years: it wouldn't have been possible to deliver multiple terabytes of data per month without their sponsorship!

WRT IPv6 connectivity, that's where the results of my quest really differed: while some DCs support IPv6 natively, and even answer you within 5 minutes when you ask for a /64 subnet to be allocated, some others aren't IPv6 ready yet. In the worst case the answer was "nothing ready and no plan for that", and sometimes the received answer was something like "it's on the roadmap for 2018/2019".

The good news is that ~30% of our nodes behind msync.centos.org now have IPv6 connectivity, so the next step is to test our various configurations (distributed by puppet) and then also our GeoIP redirection (done at the PowerDNS level for such records, for which we'll then also add proper AAAA records).

Hopefully we'll have that tested and announced soon, as well as for the other public services we provide to you.

Stay tuned for more info about IPv6 deployment within centos.org!


Karsten Wade: EPEL round table at FOSDEM 2016

January 26, 2016 - 19:57

As a follow-up to last year's literally-a-discussion-in-the-hallway about EPEL with a few dozen folks at FOSDEM 2015, we're doing a round table discussion with some of the same people and similar topics this Sunday at FOSDEM: "Wither EPEL? Harvesting the next generation of software for the enterprise", in the distro devroom. As a treat, Stephen Smoogen will be moderating the panel; Smooge is not only a long-time Fedora and CentOS contributor, he is one of the people who started EPEL a decade ago.

If you are an EPEL user (for whatever operating system), a packager, an upstream project member who wants to see your software in EPEL, a hardware enthusiast wanting to see builds for your favorite architecture, etc. … you are welcome to join us. We’ll have plenty of time for questions and issues from the audience.

The trick is that EPEL is useful or crucial for a number of the projects now releasing on top of CentOS via the special interest group process (SIGs provide their community newer software on top of the slow-and-steady CentOS Linux). This means EPEL is essential for work happening inside the CentOS Project, but it remains a third-party repository. Figuring out all the details of working together across the Fedora and CentOS projects is important for both communities.

Hope to see you there!


Fabian Arrotin: Kernel 3.10.0-327 issue on AMD Neo processor

December 15, 2015 - 00:00

As CentOS 7 (1511) was released, I thought it would be a good idea to update several of my home machines (including the kids' workstations) to that version, and also to the newer kernel. Usually that's a smooth operation, but sometimes backported or new features, especially in the kernel, can lead to strange issues. That's what happened with my older ThinkPad Edge. It's a cheap/small ThinkPad that Lenovo made several years ago (circa 2011), and that I used a lot when travelling, as it only has an AMD Athlon(tm) II Neo K345 Dual-Core Processor. So basically not a lot of horsepower, but still convenient just to read mail, connect remotely through ssh, or browse the web. When rebooting into the newer kernel, it panicked directly.

Two bug reports are open for this, one on the CentOS bug tracker, linked to the upstream one. The current status is that there is no kernel update that fixes this, but there is an easy-to-implement workaround (a sketch of the resulting config follows the list):

  • boot with the initcall_blacklist=clocksource_done_booting kernel parameter added (or reboot into the previous kernel)
  • once booted, add the same parameter at the end of the GRUB_CMDLINE_LINUX=".." line in the file /etc/default/grub
  • as root, run grub2-mkconfig -o /etc/grub2.cfg
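The resulting line in /etc/default/grub would look like this (a sketch; "rhgb quiet" stands for whatever options your file already has):

# /etc/default/grub -- append the parameter to the existing options
GRUB_CMDLINE_LINUX="rhgb quiet initcall_blacklist=clocksource_done_booting"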

Hope it can help others too


Fabian Arrotin: Kernel IO wait and megaraid controller

December 1, 2015 - 00:00

Last Friday, while working on something else (the "CentOS 7 userland" release for armv7hl boards), I got notifications from our Zabbix monitoring instance complaining about web scenarios failing (errors due to timeouts), and then also about "Disk I/O is overloaded" triggers (checking the CPU iowait time). Usually you'd verify what's happening in the virtual machine itself, but even connecting to the VM was difficult and slow. Once connected, though, nothing was strange and there was no real activity, not even on the disk (there are plenty of tools for this, but iotop is helpful to see which process is reading/writing to the disk in that case), yet iowait was almost at 100%.

As said, it was suddenly happening for all virtual machines on the same hypervisor (a CentOS 6 x86_64 KVM host), and even the hypervisor itself was complaining (though less, in comparison with the VMs) about iowait too. So obviously it wasn't something unoptimized in the hypervisor/VMs, but something else. That rang a bell: if you have a RAID controller and its battery, for example, needs to be replaced, the controller can decide to stop all read/write caching, slowing down all IO going to the disks.

At first sight there was no HDD issue, and the array/logical volume was working fine (no failed HDD in that RAID10 volume), so it was time to dive deeper into the analysis.

That server has the following RAID adapter:

03:00.0 RAID bus controller: LSI Logic / Symbios Logic MegaRAID SAS 2108 [Liberator] (rev 03)

That means that you need to use the MegaCLI tool for it.

A quick MegaCli64 -ShowSummary -a0 showed me that the underlying disks were indeed active, but my attention was caught by the fact that there was a "Patrol Read" operation in progress on a disk. I then discovered a useful (bookmarked, as it's a gold mine) page explaining the issue with the default settings and the "Patrol Read" operation. While it seems a good idea to scan the disks in the background to discover disk errors in advance (PFA), the default setting is really not optimized: (from that website) it "will take up to 30% of IO resources".
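You can also query the adapter for the current Patrol Read state and schedule first (a sketch; the MegaCli64 binary usually lives under /opt/MegaRAID/MegaCli/):

MegaCli64 -AdpPR -Info -aALL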

I decided to stop the currently running Patrol Read process with MegaCli64 -AdpPR -Stop -aALL and directly saw the virtual machines' (and hypervisor's) iowait going back to normal. Here is the Zabbix graph for one of the impacted VMs; it's easy to guess when I stopped the underlying "Patrol Read" process:

That "Patrol Read" operation is scheduled to run by default once a week (every 168h), so your real options are to either disable it completely (through MegaCli64 -AdpPR -Dsbl -aALL) or, at least (advised), change the IO impact (for example to 5%: MegaCli64 -AdpSetProp PatrolReadRate 5 -aALL).

Never underestimate the power of hardware settings (in the BIOS or, in this case, the RAID hardware controller).

Hope it can help others too

