When running tests the standard way, you can’t interfere with the process to explore what’s gone wrong.
But there’s a trick: you can start the test driver in a Python REPL, which provides an interactive shell where you can execute your tests. This is a great way to shorten the feedback loop, as we can execute commands on our VMs. For instance, we can tell a VM to dump logs or to display the contents of files.
So, let’s explore how to run tests interactively.
To start the hello-world-server test in interactive mode, you first need to build the test driver by appending the .driver attribute to the test name, and then start it manually with the --interactive flag. Here’s how you do it:
# Here we assume that our test machine is running on x86_64-linux; adjust this to your own architecture
$ nix build .#checks.x86_64-linux.hello-world-server.driver
This will create a result symlink pointing to the test driver (all files are created in the nix store, and we don’t want to copy them outside). We can run the test driver like this:
./result/bin/nixos-test-driver --interactive
Note: Usually there is no Internet access when running tests, because you want things to be reproducible and self-contained. Running NixOS tests this way allows the VM to access the Internet, so some services that don’t work in the nix build sandbox will work here, and some previously failing tests may pass.
Inside the REPL, you can type out the Python commands to test your module. For example:
>>> node1.wait_for_unit("hello-world-server")
The API of the test driver gives you direct shell access with <yourmachine>.shell_interact(), so you can access the shell running inside the guest machine.
To try it out, let’s replace the placeholder with the name of the VM defined in the test — node1:
>>> node1.shell_interact()
node1: Terminal is ready (there is no initial prompt):
$ hostname
node1
For complex modules, you may need to execute certain tests and only then inspect the virtual machine. In that case, you can use the breakpoint() function in your test script and run the test driver without the --interactive flag:
# shortened example ./tests/hello-world-server.nix from above
(import ./lib.nix) {
  # ...
  testScript = ''
    start_all()
    node1.wait_for_unit("hello-world-server")
    output = node1.succeed("curl localhost:8000/index.html")
    # The test will stop at this line, giving you control over execution.
    breakpoint()
    assert "Hello world" in output, f"'{output}' does not contain 'Hello world'"
  '';
}
Here, we stopped the test flow so we can look at the value of output and check the status of the module with systemctl.
$ nix build .#checks.x86_64-linux.hello-world-server.driver
$ ./result/bin/nixos-test-driver
>>> print(output)
>>> node1.execute("systemctl status hello-world-server")
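Since the test script is plain Python, breakpoint() follows PEP 553: it invokes the hook named by the PYTHONBREAKPOINT environment variable, which defaults to pdb.set_trace. A handy consequence, sketched below with a stand-in script rather than the real test driver, is that you can leave breakpoint() calls in place and neutralize them with PYTHONBREAKPOINT=0 when you want a non-interactive run:

```python
import os
import subprocess
import sys

# A stand-in for a test script that contains a leftover breakpoint() call.
script = "x = 41 + 1\nbreakpoint()\nprint(x)\n"

# PYTHONBREAKPOINT=0 turns breakpoint() into a no-op (PEP 553),
# so the script runs to completion without dropping into pdb.
env = dict(os.environ, PYTHONBREAKPOINT="0")
out = subprocess.run([sys.executable, "-c", script],
                     env=env, capture_output=True, text=True).stdout
print(out)
```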
In this article, we showed how you can interactively execute NixOS tests for easier troubleshooting and debugging. In short, you can do so using either the --interactive flag or breakpoints in your test script. In comparison to running tests in a sandbox, you get immediate feedback and code completion, and you can inspect intermediate results.
By employing these techniques, you can improve the quality and reliability of your NixOS modules and ensure that they are functioning correctly.
With NixOS testing framework, you can create end-to-end integration tests easily. It all comes down to starting a virtual machine based on your custom modules and testing its state with a Python script. This way, you can identify in advance all the regressions and incompatible configurations arising from the updates you introduced.
One of the framework’s upsides is that it’s extremely fast — maybe the fastest of its kind: setting up VMs and running tests does not take much time thanks to sharing files with the nix store on the host.
But previously, there was no stable API to import the testing framework into projects, so it was hard to test anything outside NixOS. The situation has changed thanks to Robert Hensing, who created a new modular interface for testing.
But there’s still a problem with documentation. Of course, you can refer to the corresponding manual chapter to explore NixOS testing framework. But many topics aren’t explained in detail, so I decided to write a brief intro to testing NixOS modules with flakes.
Let me give you some info on how tests are executed, and how to incorporate them into your project. If you’re new to NixOS, this info may be helpful.
So, how are tests executed in NixOS? To verify that the flake can be evaluated successfully, we run the nix flake check command. Under the hood, nix will run the so-called test driver in its own build sandbox. The test driver provides an API for the test script to set up virtual machines. When the VMs are ready, a series of tests is executed to check if the NixOS modules are functioning as intended.
That’s a very broad outlook on how tests work. But how do you write tests?
First, if you are testing a module outside NixOS, i.e. in your own project, you have to import nixpkgs, the biggest repository of Nix packages, where the testing library is located.
There are several ways to import nixpkgs in your code. One way is via fetchTarball:
rec {
  nixpkgs = fetchTarball "https://github.com/NixOS/nixpkgs/archive/....tar.gz";
  pkgs = import nixpkgs { };
}
But fetchTarball is a builtin, which means that nixpkgs will be downloaded during evaluation. Another way is to load nixpkgs using a flake. It’s more convenient, because this way you can update the dependencies easily. I’ll use this approach in my example.
Let’s move to the coding part now.
As an example, I’ll take a simple project that runs a web server returning a “Hello world!” string. First, let’s specify the flake:
# flake.nix
{
  inputs.nixpkgs.url = "github:nixos/nixpkgs/nixpkgs-unstable";
  outputs = { self, nixpkgs, ... }: {
    nixosModules.hello-world-server = import ./hello-world-server.nix;
  };
}
This flake exposes the module ./hello-world-server.nix. You can find the file in the repository here. What it does is create a simple HTML page and start a server on port 8000. The module behaves correctly if it returns a “Hello world!” string; any other output is incorrect.
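To make the expected behavior concrete, here is a small Python stand-in for what the service does, not the actual module: serve an index.html with the greeting over HTTP. (The real service listens on port 8000; here we pick a free port to keep the sketch self-contained.)

```python
import functools
import http.server
import pathlib
import tempfile
import threading
import urllib.request

# Serve a directory containing an index.html with the expected greeting.
root = tempfile.mkdtemp()
pathlib.Path(root, "index.html").write_text("Hello world!\n")

handler = functools.partial(http.server.SimpleHTTPRequestHandler, directory=root)
server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

port = server.server_address[1]
output = urllib.request.urlopen(f"http://127.0.0.1:{port}/index.html").read().decode()
print(output)
server.shutdown()
```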
Now that we have our flake and module, we can write a test to check if we can reach the server.
But before that, we will create a helper function in ./tests/lib.nix, which will import the testing framework from nixpkgs. Extending specialArgs will allow us to pass through any flake inputs and outputs.
# tests/lib.nix
# The first argument to this function is the test module itself
test:
# These arguments are provided by `flake.nix` on import, see checkArgs
{ pkgs, self }:
let
  inherit (pkgs) lib;
  # this imports the nixos library that contains our testing framework
  nixos-lib = import (pkgs.path + "/nixos/lib") { };
in
(nixos-lib.runTest {
  hostPkgs = pkgs;
  # This speeds up the evaluation by skipping evaluating documentation (optional)
  defaults.documentation.enable = lib.mkDefault false;
  # This makes `self` available in the NixOS configuration of our virtual machines.
  # This is useful for referencing modules or packages from your own flake
  # as well as importing from other flakes.
  node.specialArgs = { inherit self; };
  imports = [ test ];
}).config.result
You can use this helper function across different NixOS tests in your project.
Now, let’s create the test:
# ./tests/hello-world-server.nix
(import ./lib.nix) {
  name = "from-nixos";
  nodes = {
    # `self` here is set by using specialArgs in `lib.nix`
    node1 = { self, pkgs, ... }: {
      imports = [ self.nixosModules.hello-world-server ];
      environment.systemPackages = [ pkgs.curl ];
    };
  };
  # This is the test code that will check if our service is running correctly:
  testScript = ''
    start_all()
    # wait for our service to start
    node1.wait_for_unit("hello-world-server")
    node1.wait_for_open_port(8000)
    output = node1.succeed("curl localhost:8000/index.html")
    # Check if our webserver returns the expected result
    assert "Hello world" in output, f"'{output}' does not contain 'Hello world'"
  '';
}
To expose the test in our flake, we will import it in the checks output in the flake.nix file. This will make the test run when you execute the nix flake check -L command.
# flake.nix
{
  inputs.nixpkgs.url = "github:nixos/nixpkgs/nixpkgs-unstable";
  outputs = { self, nixpkgs, ... }: let
    # expose systems for `x86_64-linux` and `aarch64-linux`
    forAllSystems = nixpkgs.lib.genAttrs [ "x86_64-linux" "aarch64-linux" ];
  in {
    nixosModules.hello-world-server = import ./hello-world-server.nix;
    checks = forAllSystems (system: let
      checkArgs = {
        # reference to nixpkgs for the current system
        pkgs = nixpkgs.legacyPackages.${system};
        # this gives us a reference to our flake but also all flake inputs
        inherit self;
      };
    in {
      # import our test
      hello-world-server = import ./tests/hello-world-server.nix checkArgs;
    });
  };
}
To verify that everything works as expected, run:
$ nix flake check -L
The -L flag tells nix to print the full build logs as they occur, making the test easier to follow.
start all VLans
...
start all VMs
...
node1: waiting for unit hello-world-server
node1: waiting for the VM to finish booting
...
(finished: waiting for unit hello-world-server, in 7.02 seconds)
node1: must succeed: curl localhost:8000/index.html
node1 # % Total % Received % Xferd Average Speed Time Time Time Current
node1 # Dload Upload Total Spent Left Speed
node1 # 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0[ 6.668081] hello-world-server[824]: 127.0.0.1 - - [08/Jan/2023 19:59:47] "GET /index.html HTTP/1.1" 200 -
node1 # 100 87 100 87 0 0 4034 0 --:--:-- --:--:-- --:--:-- 4350
(finished: must succeed: curl localhost:8000/index.html, in 0.07 seconds)
(finished: run the VM test script, in 7.15 seconds)
test script finished in 7.18s
...
Here, the testing framework creates a virtual network and a virtual machine with our module in it, then it waits for the hello-world-server to start and checks if its output is valid. Here, the output is “Hello world!”, so we passed the test.
Now our hello-world-server NixOS module has a proper test!
In this article, we explained how you can leverage the NixOS testing framework for your projects while importing the nixpkgs repository. In particular, we defined a NixOS test in a flake and exposed it through the checks output, making it run when executing the nix flake check -L command.
But often you need to run your tests interactively to check the debug output and gain more insight into why a test isn’t behaving the way you expected. That’s what I explore in a twin article.
In this article, I will discuss the technical issue of running pre-compiled executables on NixOS, and how we can improve the user experience by making these binaries work seamlessly using nix-ld.
One of the key benefits of NixOS is its focus on purity and reproducibility. The operating system is designed to ensure that the system configuration and installed software are always in a known and predictable state. This is achieved through the use of the Nix package manager, which allows users to declaratively specify their system configuration and software dependencies.
However, this focus on purity can make it difficult for users to run pre-compiled executables that were not specifically designed for NixOS. These executables may have dependencies on libraries that are not available in the Nix package manager, or may require patching or modification to work correctly on the operating system.
If you have used NixOS for a while, you may have encountered an issue when attempting to run a pre-compiled executable. You probably saw something like this:
$ ./masterpdfeditor5
bash: ./masterpdfeditor5: No such file or directory
However, the file clearly exists:
$ ls -la ./masterpdfeditor5
-rwxr-xr-x 1 joerg users 27160344 Jul 4 16:22 ./masterpdfeditor5
To understand what is going on, we need to look at what happens when an executable is run on a Linux operating system. When the shell attempts to run a program, it uses an execve system call to request the operating system to run the program. We can use the tool strace to visualize this:
$ strace -f ./masterpdfeditor5
execve("./masterpdfeditor5", ["./masterpdfeditor5"], 0x7fff70350ef8 /* 188 vars */) = -1 ENOENT (No such file or directory)
strace: exec: No such file or directory
+++ exited with 1 +++
strace prints out the system call and its arguments, as well as the return code from the operating system. In this case, we can see that bash derived its error message (No such file or directory) from the execve system call.
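We can reproduce this confusing failure mode in a few lines of Python: the file clearly exists and is executable, yet running it fails with ENOENT because its interpreter is missing. (The interpreter path below is made up for illustration; on NixOS the same thing happens with an ELF binary whose link-loader path does not exist.)

```python
import os
import subprocess
import tempfile

# Create an executable script whose interpreter path does not exist.
with tempfile.NamedTemporaryFile("w", suffix=".sh", delete=False) as f:
    f.write("#!/no/such/interpreter\necho hello\n")
    path = f.name
os.chmod(path, 0o755)

file_exists = os.path.exists(path)
print(file_exists)  # the file itself is there

try:
    subprocess.run([path])
    error = None
except FileNotFoundError as e:
    # execve failed with ENOENT because of the missing interpreter,
    # even though the script file exists.
    error = e
print(error)
os.unlink(path)
```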
To understand why the operating system is reporting this error, we need to analyze the executable file further. The file command provides more information about the executable:
$ file ./masterpdfeditor5
masterpdfeditor5: ELF 64-bit LSB executable, x86-64, version 1 (GNU/Linux), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, for GNU/Linux 2.6.32, BuildID[sha1]=406f865023e33cc6a0f9d179cc14a939c4b29fbe, stripped
We can see that the executable is a dynamically linked ELF binary that depends on libraries found on the system to function. It uses a link-loader program, also known as an interpreter, to locate and load these libraries. Commonly these programs are provided by your system libc, which in most cases is glibc, and they live in a fixed location (/lib64/ld-linux-x86-64.so.2 if your CPU is x86_64-based).
On NixOS, the issue with running pre-compiled executables arises because Nix allows users to mix different libraries, including different versions of the glibc package. Unlike other Linux distributions, NixOS does not provide a fixed path such as /lib64/ld-linux-x86-64.so.2 for the link-loader program. Executables packaged with Nix are instead linked against a specific version of glibc. The patchelf command can be used to find out exactly which version is being used:
$ patchelf --print-interpreter /run/current-system/sw/bin/ls
/nix/store/ayfr5l52xkqqjn3n4h9jfacgnchz1z7s-glibc-2.35-224/lib/ld-linux-x86-64.so.2
When the operating system tries to run an executable, it parses the binary and looks for the specified link-loader. If it cannot find it, it returns the generic error code ENOENT, which results in an unhelpful error message.
To work around this issue when packaging programs that do not have the source code available, such as masterpdfeditor, Nix uses a build function called autoPatchelfHook to analyze the binary and resolve any missing dependencies. This function rewrites the interpreter path /lib64/ld-linux-x86-64.so.2 to a specific version of the glibc package, and populates the RPATH field in the executable with paths to all necessary libraries for the program to run. The link-loader uses this field to locate the libraries at runtime.
We can use the patchelf program to see the effect of autoPatchelfHook on the masterpdfeditor program. By using nix-shell to load a shell with masterpdfeditor and then printing the RPATH of the program, we can see the paths to the necessary libraries encoded in the program.
First, we load up a shell with masterpdfeditor in it.
$ nix-shell -p masterpdfeditor
Next, we get the nix store path of the program:
[nix-shell]$ which masterpdfeditor5
/nix/store/zmdjwbizg4a6cja4darcn2qy9imr336k-masterpdfeditor-5.8.70/bin/masterpdfeditor5
The next command prints the RPATH encoded in the program.
[nix-shell]$ patchelf --print-rpath "/nix/store/zmdjwbizg4a6cja4darcn2qy9imr336k-masterpdfeditor-5.8.70/bin/.masterpdfeditor5-wrapped"
It gives this result:
/nix/store/y4k2206qhks30wspxx1nkmgfqfdmxp0j-sane-backends-1.1.1/lib:/nix/store/zaflwh2nwzj1f0wngd7hqm3nvlf3yhsx-zlib-1.2.13/lib:/nix/store/dgxn688wq7whsvs2fycygq0wn888xnsv-qtsvg-5.15.7/lib:/nix/store/9lcgwnc70f4wj1czklczql7awcv24mi-qtbase-5.15.7/lib:/nix/store/lgfp5762m5qzby9syd21kj04l5qmjg4h-qtdeclarative-5.15.7/lib:/nix/store/ykjcsxdh9c1w664g6v38d86gph8m6mq7-libglvnd-1.5.0/lib:/nix/store/wprxx5zkkk13hpj6k1v6qadjylh3vq9m-gcc-11.3.0-lib/lib
While autoPatchelfHook is a useful tool for making many programs usable in Nix, there are a few cases where it may not be possible or practical to use it.
To address these cases, nix-ld was created as an alternative to autoPatchelfHook. It allows users to run pre-compiled executables on NixOS without the need to modify the binaries or copy them to the Nix store. This improves the user experience by allowing users to easily run binaries downloaded from third-party sources and proprietary software without patching or modification.
It is installed in the same location as the link-loader on other Linux distributions (i.e. /lib64/ld-linux-x86-64.so.2), and it loads the actual link-loader as specified in the NIX_LD environment variable. It also accepts a colon-separated list of library lookup paths in NIX_LD_LIBRARY_PATH and rewrites this variable to LD_LIBRARY_PATH before passing execution to the link-loader. This allows users to specify additional libraries that the executable needs to run.
On a system configured with nix-ld, the error message when attempting to run an unpatched binary is more informative and provides guidance on how to address the issue:
$ ./masterpdfeditor5
cannot execute ./masterpdfeditor5: You are trying to run an unpatched binary on nixos, but you have not configured NIX_LD or NIX_LD_x86_64-linux. See https://github.com/Mic92/nix-ld for more details
To further improve the user experience, a new feature is available in the latest unstable version of NixOS and the upcoming 23.05 release. It allows the most common libraries to be included in the NixOS configuration as follows:
{ config, pkgs, ... }: {
  # Enable nix-ld
  programs.nix-ld.enable = true;
  # Sets up all the libraries to load
  programs.nix-ld.libraries = with pkgs; [
    stdenv.cc.cc
    zlib
    fuse3
    icu
    nss
    openssl
    curl
    expat
    # ...
  ];
}
For a more extensive version of this configuration, see my dotfiles.
By including the most common libraries in the configuration, nix-ld can provide a more seamless experience for users running pre-compiled executables on NixOS. They will not need to manually specify the necessary libraries for each executable and can simply run them as they would on other Linux distributions.
In conclusion, nix-ld is a useful tool for running pre-compiled executables on NixOS without the need for patching or modification. It provides a shim layer that allows users to specify the necessary libraries for each executable and improves the user experience by allowing users to easily run binaries from third-party sources and proprietary software. By including the most common libraries in the NixOS configuration, nix-ld can provide an even more seamless experience for running pre-compiled executables on NixOS.
In my next article, I’ll be looking at a similar issue to the one encountered when working with executable binaries. Scripts that are hardcoded to point to /usr/bin can also cause problems on NixOS, and I will address this by introducing envfs.
Commonly, Linux distributions put their kernel sources in /usr/src and their kernel modules in /lib/modules/$(uname -r). As always, NixOS is a special snowflake, but once you get to learn the mechanics, it is actually quite pleasant to use.
In the NixOS configuration, the kernel is defined via the boot.kernelPackages option. This option also defines all out-of-tree kernel modules and other packages that have the kernel as a build dependency. So, to access the kernel only, you should look into boot.kernelPackages.kernel.
Now that you are familiar with the topic, let’s proceed to building kernel modules. This article will guide you through the following steps:
Let’s say you have your NixOS configured in flake.nix like this:
{
  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable-small";
  outputs = { self, nixpkgs }: {
    nixosConfigurations = {
      my-nixos = nixpkgs.lib.nixosSystem {
        system = "x86_64-linux";
        modules = [ ./configuration.nix ];
      };
    };
  };
}
Let’s assume your NixOS flake is in /etc/nixos. To get a development shell that has all the required dependencies for building a kernel and kernel modules, you can run the command below. It will add a C compiler and some libraries needed for compiling to your shell.
$ nix develop "/etc/nixos#nixosConfigurations.my-nixos.config.boot.kernel"
Apart from the shell, we will also need the kernel development headers to build a kernel module. They can be found in boot.kernelPackages.kernel.dev.
Let’s clone an example kernel module and build it:
nix-shell> KERNELDIR=$(nix build --print-out-paths "/etc/nixos/#nixosConfigurations.my-nixos.config.boot.kernelPackages.kernel.dev")
nix-shell> git clone https://github.com/Mic92/uptime_hack/
nix-shell> cd uptime_hack
nix-shell> make -C $KERNELDIR/lib/modules/*/build M=$(pwd)
make: Entering directory '/nix/store/i7ph759bmlgrlkbz4dj5bjbbq47gx5nw-linux-6.0.12-dev/lib/modules/6.0.12/build'
CC [M] /home/joerg/git/uptime_hack/uptime_hack.o
MODPOST /home/joerg/git/uptime_hack/Module.symvers
CC [M] /home/joerg/git/uptime_hack/uptime_hack.mod.o
LD [M] /home/joerg/git/uptime_hack/uptime_hack.ko
BTF [M] /home/joerg/git/uptime_hack/uptime_hack.ko
Skipping BTF generation for /home/joerg/git/uptime_hack/uptime_hack.ko due to unavailability of vmlinux
make: Leaving directory '/nix/store/i7ph759bmlgrlkbz4dj5bjbbq47gx5nw-linux-6.0.12-dev/lib/modules/6.0.12/build'
We can also use this algorithm to build in-tree kernel drivers.
Next, we’ll need to unpack the current kernel source and copy the kernel configuration file to our unpacked Linux tree. The current kernel source is stored in $src in the shell provided by nix develop. We can unpack the kernel like this:
$ tar -xvf "$src"
$ cd linux-*
Then, the Linux kernel configuration is stored in .config. We can copy this file from the kernel.dev package to our unpacked Linux tree:
$ cp $KERNELDIR/lib/modules/*/build/.config .config
Next, we will compile the kernel modules. But first, we need to prepare the build environment for building kernel modules:
$ make scripts prepare modules_prepare
Now, let’s build the null_blk block device driver like this:
$ make -C . M=drivers/block/null_blk
If we actually want to insert any of those drivers into the running system, the kernel in the NixOS configuration needs to be the same as the kernel of the booted system. So, it makes sense to check and compare the kernel versions, which you can do like this:
$ nix build --print-out-paths "/etc/nixos/#nixosConfigurations.my-nixos.config.boot.kernelPackages.kernel"
/nix/store/yyz5jkjsan9q7v8aa4i7697rrivzwmjz-linux-6.0.12
$ realpath /run/booted-system/kernel
/nix/store/yyz5jkjsan9q7v8aa4i7697rrivzwmjz-linux-6.0.12/bzImage
In this case, the paths match because I have not updated my Linux kernel since I rebooted.
However, there is an even better way to replace the drivers with the new ones: by adding a symlink of our NixOS flake to our NixOS system. This way, we will always be able to refer to the flake at boot time.
How can you make the NixOS closure contain a symlink to its own configuration flake? By adding extra lines to system.extraSystemBuilderCmds like this:
{
  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable-small";
  outputs = { self, nixpkgs }: {
    nixosConfigurations = {
      my-nixos = nixpkgs.lib.nixosSystem {
        system = "x86_64-linux";
        modules = [
          ./configuration.nix
          # This will add a symlink in your nixos closure
          {
            system.extraSystemBuilderCmds = ''
              ln -s ${self} $out/flake
            '';
          }
        ];
      };
    };
  };
}
After a reboot, we can check that the symlink was added by looking at /run/booted-system/flake:
$ ls -la /run/booted-system/flake
lrwxrwxrwx 2 root root 50 Jan 1 1970 /run/booted-system/flake -> /nix/store/mpqvkfdn46c8b3sd4zcg2fm0y4nsya8v-source
Now you can refer to your NixOS configuration like this…
$ nix develop "$(realpath /run/booted-system/flake)#nixosConfigurations.$(hostname).config.boot.kernelPackages.kernel"
… and never have to wonder if your system is still in sync with your configuration.
Because things in NixOS are different from what we are used to in regular Linux distributions, hacking a kernel needs some special attention. In this tutorial, I shared my experience of hacking the NixOS kernel.
For quicker iterations on building kernels, also check out the NixOS wiki article that describes how to debug the Linux kernel with QEMU in NixOS.
Last week I was setting up this RISC-V-based HiFive Unmatched board[1] with NixOS. Thanks to zhaofengli, this was actually pretty straightforward, given that his repository contained a full walk-through, images and a binary cache. So instead of spending the NixOS Munich Meetup hacking on this architecture, I had time to go further.
One of the things that quickly becomes apparent while hacking on the board is that although it is quite beefy with 16GB of RAM and NVMe storage, it cannot keep up with up-to-date x86 machines. This is where cross-compiling NixOS helps.
In this article I will show you how to use NixOS on a host x86_64 machine to debug and cross-deploy another NixOS machine. And iterate faster doing so.
We’re going to do this with the following steps:
First we need to find out the architecture we want to build on and the architecture to build for. The easiest way to find out is using nix repl:
$ nix repl '<nixpkgs>'
repl> pkgs.system # This is our build architecture
"x86_64-linux"
# use tab completion to find the architecture you want to build for
repl> pkgsCross.<TAB>
We are interested in pkgsCross.<arch>.system here. For my board, this looks like this:
repl> pkgsCross.riscv64.system
"riscv64-linux"
With this information, we can define the cross-compiled variant of our NixOS machine:
{
  inputs.nixpkgs.url = "github:NixOS/nixpkgs/release-22.11";
  outputs = { self, nixpkgs }: {
    nixosConfigurations = {
      # Native machine build
      my-nixos = nixpkgs.lib.nixosSystem {
        system = "riscv64-linux";
        modules = [ ./configuration.nix ];
      };
      # Cross machine build, from x86_64
      my-nixos-from-x86_64 = nixpkgs.lib.nixosSystem {
        modules = [
          ./configuration.nix
          {
            # This is the architecture we build from (pkgs.system from above)
            nixpkgs.buildPlatform = "x86_64-linux";
            # pkgsCross.<yourtarget>.system
            nixpkgs.hostPlatform = "riscv64-linux";
          }
        ];
      };
    };
  };
}
Now that we have this extended flake configuration, deploying the new system closures to the board becomes easy:
$ nixos-rebuild switch \
--fast \
--build-host localhost \
--target-host $target_host \
--flake .#my-nixos-from-x86_64
nixos-rebuild will (1) build the system on the host machine, then (2) copy the build result onto the board, and finally (3) atomically switch the configuration. The --fast flag here is crucial, since it stops nixos-rebuild from using the riscv build of nix on the x86_64 machine.
While many packages cross-compile out of the box, a few packages are not aware of cross-compiling and try to execute binaries they have just built on the build machine. Since it is sometimes not feasible to fix these issues easily, one trick is to set up platform emulation support based on binfmt_misc and qemu. This allows binaries that were actually compiled for a different architecture to run directly on the NixOS host.
It also allows you to test and run binaries without having to copy them over to the target machine.
To do that, extend the host NixOS configuration:
{
  boot.binfmt.emulatedSystems = [
    "riscv64-linux"
  ];
}
Cross-compiling has made good progress over the years. While it is still not a first-class citizen in nixpkgs, it is now in a usable state for deploying NixOS systems. This helps a lot to get NixOS onto little computers and to port Nix to new architectures that are not covered by official Hydra builds.
$ mkdir -p ~/.local/bin
$ ln -s /bin/bash ~/.local/bin/sh
Thanks to ole2 for providing this solution in the xilinx forum, which you can find here: forum post
As he points out, this seems to be a bug in Vivado. Vivado calls a script with #!/bin/sh and expects bash to be executed. But on Ubuntu, /bin/sh points to /bin/dash by default. An alternative solution is to re-configure this link using:
$ sudo dpkg-reconfigure dash
I had this issue in Vivado 2016.4 and Ubuntu 16.04 LTS.
The first thing to do is to install fastboot and adb on your PC/Mac. Make sure that you have enabled the development option on your Android device and are able to connect to it via adb.
Then place the update you want to install on the sdcard of your device. In case you want to install the root patch, you can download the latest SuperSU. Note that you will not be able to install custom ROMs if your bootloader is locked. If the signatures mismatch, it will refuse to boot.
The next thing to do is to download and extract IntelAndroid-FBRL-07-24-2015.7z mentioned in the post. It contains recovery images for CWM and TWRP and some custom trigger code to start a temporary CWM recovery session on the device. After a reboot, this session will be gone, but you can apply updates such as SuperSU during the session. You will not be able to follow the exact instructions from this forum post, because it contains a Windows-specific batch file and Windows executables. However, these are just fancy wrappers around adb and fastboot, so you can still use the contained images and launch code.
To reboot your device into the bootloader, connect it to your computer and run, while it is turned on:
$ adb reboot-bootloader
Within the boot loader, we will first put the alternate rescue image on the device along with some custom launcher code. I first tried TWRP on my device, but my touchscreen didn’t work with it, so I stuck with CWM:
# assuming you have changed to the directory of extracted archive:
$ fastboot flash /tmp/recovery.zip FB_RecoveryLauncher/cwm.zip
$ fastboot flash /tmp/recovery.launcher FB_RecoveryLauncher/recovery.launcher
The next thing to do is to trigger the device via fastboot to start our recovery. The forum post contained 4 alternatives approaches based on the android device. The following (T4) was working for me:
$ fastboot oem start_partitioning; fastboot flash /system/bin/logcat FB_RecoveryLauncher/fbrl.trigger; fastboot oem stop_partitioning
This temporarily replaces logcat with a launcher. It is important to execute all commands in one shot; otherwise fastboot will fail to flash logcat.
If that command does not work for you, you can try one of these alternatives:
# T1
$ fastboot flash /sbin/adbd FB_RecoveryLauncher/fbrl.trigger; fastboot oem startftm
# T2
$ fastboot flash /system/bin/cp FB_RecoveryLauncher/fbrl.trigger; fastboot oem backup_factory
# T3
$ fastboot flash /sbin/partlink FB_RecoveryLauncher/fbrl.trigger; fastboot oem stop_partitioning
If everything works it should start the recovery image.
What you can do is use a socket unit in systemd, which waits on a TCP port for connections and starts the service when somebody requests it.
The systemd configuration could look like this:
[Unit]
Description=Start update on demand
[Socket]
ListenStream=3000
# only listen on localhost
#ListenStream=127.0.0.1:3000
BindIPv6Only=both
[Install]
WantedBy=multi-user.target
[Unit]
Description=Start update on demand
JobTimeoutSec=5min
[Service]
User=nobody
ExecStart=/usr/bin/python /path/to/script.py
In your Python code, do the following:
import os
import socket

def systemd_socket_response():
    """
    Accept every connection on the listen socket provided by systemd and send
    the HTTP response 'OK' back.
    """
    try:
        from systemd.daemon import listen_fds
        fds = listen_fds()
    except ImportError:
        # systemd passes the first inherited socket as file descriptor 3
        fds = [3]
    for fd in fds:
        sock = socket.fromfd(fd, socket.AF_INET, socket.SOCK_STREAM)
        sock.settimeout(0)
        try:
            while True:
                conn, addr = sock.accept()
                conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Type: text/plain\r\nContent-Length: 3\r\n\r\nOK\n")
        except socket.timeout:
            pass
        except OSError as e:
            # Connection closed again? Don't care, we just do our job.
            print(e)

if __name__ == "__main__":
    if os.environ.get("LISTEN_FDS") is not None:
        systemd_socket_response()
    # here your own code begins
    do_work()
This still lacks authentication and does not take any arguments. You could protect the port with a frontend webserver using HTTP authentication, or pass the listen socket on to a Python HTTP server that adds some token-based authentication. Systemd will ensure that your service does not run more than once at a time.
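The mechanism is easy to try outside of systemd: systemd creates the listening socket itself and hands it to the service as an inherited file descriptor, and the service just accepts connections from it. A minimal sketch of that hand-off (the port is chosen by the OS here, and the threading is only there to play both sides in one process):

```python
# Sketch: emulate what systemd does -- it creates the listening socket and
# passes its file descriptor to the service, which then accepts connections.
import socket
import threading

def respond_on_fd(fd):
    # rebuild a socket object from the inherited descriptor, like the
    # service script does with socket.fromfd()
    sock = socket.fromfd(fd, socket.AF_INET, socket.SOCK_STREAM)
    conn, _ = sock.accept()
    conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 3\r\n\r\nOK\n")
    conn.close()
    sock.close()

# the "systemd" side: bind and listen before the handler ever runs
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))  # any free port
listener.listen(1)
port = listener.getsockname()[1]

t = threading.Thread(target=respond_on_fd, args=(listener.fileno(),))
t.start()

# the "client" side: connecting is what would start the service on demand
client = socket.create_connection(("127.0.0.1", port))
response = client.recv(1024).decode()
t.join()
client.close()
print(response.splitlines()[-1])
```

With the real unit files in place, the client side is simply anything that opens TCP port 3000, e.g. `curl http://localhost:3000/`.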
[Service]
KillMode=process
You can just append the content of /etc/rc.digitalocean.d/droplet.conf to your /etc/rc.conf. In my case the public IPv4 address is 188.166.16.37 and my first IPv6 address is 2a03:b0c0:2:d0::2a5:f001.
defaultrouter="188.166.0.1"
# ipv6 address are shortened for readability
ipv6_defaultrouter="2a03:b0c0:2:d0::1"
ifconfig_vtnet0="inet 188.166.16.37 netmask 255.255.192.0"
ifconfig_vtnet0_ipv6="inet6 2a03:b0c0:2:d0::2a5:f001 prefixlen 64"
These days DigitalOcean provides native IPv6 in most of its datacenters. Unlike other hosters they are very sparing when distributing IPv6 addresses and only route 16 addresses per droplet (xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxx1 until xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxf). To make use of these additional addresses, they have to be assigned to your network interface vtnet0:
ifconfig_vtnet0_aliases="\
inet6 2a03:b0c0:2:d0::2a5:f002 prefixlen 64 \
inet6 2a03:b0c0:2:d0::2a5:f003 prefixlen 64 \
inet6 2a03:b0c0:2:d0::2a5:f004 prefixlen 64 \
inet6 2a03:b0c0:2:d0::2a5:f005 prefixlen 64 \
inet6 2a03:b0c0:2:d0::2a5:f006 prefixlen 64 \
inet6 2a03:b0c0:2:d0::2a5:f007 prefixlen 64 \
inet6 2a03:b0c0:2:d0::2a5:f008 prefixlen 64 \
inet6 2a03:b0c0:2:d0::2a5:f009 prefixlen 64 \
inet6 2a03:b0c0:2:d0::2a5:f00a prefixlen 64 \
inet6 2a03:b0c0:2:d0::2a5:f00b prefixlen 64 \
inet6 2a03:b0c0:2:d0::2a5:f00c prefixlen 64 \
inet6 2a03:b0c0:2:d0::2a5:f00d prefixlen 64 \
inet6 2a03:b0c0:2:d0::2a5:f00e prefixlen 64 \
inet6 2a03:b0c0:2:d0::2a5:f00f prefixlen 64"
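Typing out those fourteen alias lines by hand is error-prone; they can just as well be generated. A small sketch (the prefix and interface name are this example's values):

```python
# Generate the ifconfig_vtnet0_aliases block for addresses ...f002 to ...f00f
prefix = "2a03:b0c0:2:d0::2a5:f00"
aliases = ["inet6 %s%x prefixlen 64" % (prefix, i) for i in range(2, 16)]
block = 'ifconfig_vtnet0_aliases="\\\n' + " \\\n".join(aliases) + '"'
print(block)
```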
In case you want to add FreeBSD jails later on, it is a good idea to allocate private IPv4 addresses for these too. In my case I generated as many IPv4 addresses as I got IPv6 addresses:
cloned_interfaces="${cloned_interfaces} lo1"
ifconfig_lo1_aliases="\
inet 192.168.67.1/24 \
inet 192.168.67.2/24 \
inet 192.168.67.3/24 \
inet 192.168.67.4/24 \
inet 192.168.67.5/24 \
inet 192.168.67.6/24 \
inet 192.168.67.7/24 \
inet 192.168.67.8/24 \
inet 192.168.67.9/24 \
inet 192.168.67.10/24 \
inet 192.168.67.11/24 \
inet 192.168.67.12/24 \
inet 192.168.67.13/24 \
inet 192.168.67.14/24 \
inet 192.168.67.15/24"
To apply these network settings immediately issue the following commands in series:
$ sudo service netif restart; sudo /etc/rc.d/routing restart
The second command is important because it adds the IPv4 gateway back; otherwise you will not reach your droplet via IPv4 without rebooting.
If everything still works, you can remove the following files left over from the cloud-init provisioning:
$ rm /etc/rc.d/digitalocean
$ rm -r /etc/rc.digitalocean.d
$ rm -r /usr/local/bsd-cloudinit/
$ pkg remove avahi-autoipd
ACTION=="add", SUBSYSTEM=="net", ATTR{dev_id}=="0x0", RUN+="/usr/bin/ip link set dev %k address XX:XX:XX:XX:XX:XX"
Replace XX:XX:XX:XX:XX:XX with your current MAC address:
$ ip address
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN group
default
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP
group default qlen 1000
link/ether 02:8a:03:43:02:2a brd ff:ff:ff:ff:ff:ff
inet 192.168.1.56/24 brd 192.168.1.255 scope global eth0
inet6 fe80::8a:3ff:fe43:22a/64 scope link
valid_lft forever preferred_lft forever
inet6 fe80::9985:bd71:3b59:4875/64 scope link
valid_lft forever preferred_lft forever
which is 02:8a:03:43:02:2a in my case.
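If you want to pull the MAC address out programmatically instead of reading it off the terminal, a small sketch (the sample string stands in for real `ip address` output; in a script you would capture it with `subprocess.check_output(["ip", "address"])`):

```python
import re

# stand-in for the output of `ip address`
sample = """2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500
    link/ether 02:8a:03:43:02:2a brd ff:ff:ff:ff:ff:ff"""

mac = re.search(r"link/ether ([0-9a-f:]{17})", sample).group(1)
print(mac)
```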
Pry.config.commands.command "remove-pry", "Remove current pry" do
  require 'pry/commands/edit/file_and_line_locator'
  file_name, remove_line =
    Pry::Command::Edit::FileAndLineLocator.from_binding(_pry_.current_binding)

  temp_file = Tempfile.new('foo')
  i = 0
  File.foreach(file_name) do |line|
    i += 1
    if i == remove_line
      line.gsub!(/binding.pry(\s)?/, "")
      temp_file.write line unless line =~ /\A[[:space:]]*\z/
    else
      temp_file.write line
    end
  end
  temp_file.close
  FileUtils.cp(temp_file.path, file_name)
end
Usage
Before:
# ...
if foo == :bar
  binding.pry
  a_shiny_method
end
# ...
pry> remove-pry
After:
# ...
if foo == :bar
  a_shiny_method
end
# ...
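The core transformation the command performs is language-independent: strip the `binding.pry` call from one line and drop the line entirely if nothing else remains on it. Sketched in Python for clarity:

```python
import re

def remove_pry(lines, line_no):
    """Remove 'binding.pry' from the 1-indexed line_no; drop the line
    if it becomes whitespace-only, keep everything else untouched."""
    out = []
    for i, line in enumerate(lines, start=1):
        if i == line_no:
            line = re.sub(r"binding\.pry\s?", "", line)
            if line.strip() == "":
                continue  # the line held only the breakpoint
        out.append(line)
    return out

before = ["if foo == :bar\n", "  binding.pry\n", "  a_shiny_method\n", "end\n"]
print("".join(remove_pry(before, 2)), end="")
```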
To do so add the following lines at the top of your ferm.conf:
domain ip {
  table filter chain FORWARD {
    outerface docker0 mod conntrack ctstate (RELATED ESTABLISHED) ACCEPT;
    interface docker0 outerface !docker0 ACCEPT;
    interface docker0 outerface docker0 ACCEPT;
  }
  table nat {
    chain DOCKER;
    chain PREROUTING {
      mod addrtype dst-type LOCAL jump DOCKER;
    }
    chain OUTPUT {
      daddr !127.0.0.0/8 mod addrtype dst-type LOCAL jump DOCKER;
    }
    chain POSTROUTING {
      saddr 172.17.0.0/16 outerface !docker0 MASQUERADE;
    }
  }
}
In my case docker's subnet is 172.17.0.0/16 and docker0 is used as the bridge device.
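The MASQUERADE rule above only rewrites packets whose source address falls into that subnet; you can double-check which addresses it covers with Python's ipaddress module (the two addresses below are just example values):

```python
import ipaddress

docker_net = ipaddress.ip_network("172.17.0.0/16")

# a typical container address is masqueraded...
print(ipaddress.ip_address("172.17.0.5") in docker_net)
# ...while traffic from another subnet is not
print(ipaddress.ip_address("192.168.1.10") in docker_net)
```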
I’m the developer of OpenTraining, an open source Android app for fitness training. I recently looked for a possibility to add a simple feedback system to my app. There’s an open source framework for crash reports named ACRA that I decided to use for both crash reports and user feedback.
The Google Play Store offers a crash report system as well, but if you deploy your app on multiple app stores you might want a central instance for collecting crash reports. For user feedback many apps simply open an email Intent, but I don't think this offers a good user experience.
This is how the user feedback dialog and the generated mail look:
Advantages:
Disadvantages:
If your project is pretty large you should consider another ACRA-backend. I tried some of them, but as long as I get < 20 emails per week I’ll use the PHP backend.
This How-to is based on ACRA and ACRA-mailer.
The most important changes I had to apply to my project for adding the feedback-feature can be seen in this commit on GitHub (but there have been some more commits concerning ACRA).
If you have any problems with this step have a look at the ACRA documentation. There’s also a description for Gradle integration.
Create a new class that extends Application:
import org.acra.*;
import org.acra.annotation.*;
import android.app.Application;

@ReportsCrashes(
    formKey = "" // This is required for backward compatibility but not used
)
public class YourApplication extends Application {
    @Override
    public void onCreate() {
        super.onCreate();
        // The following line triggers the initialization of ACRA
        ACRA.init(this);
        // default crash report sender
        ACRA.getErrorReporter().setReportSender(new ACRACrashReportMailer());
    }
}
Open the android manifest editor (AndroidManifest.xml)
Make sure that your application requests the permission ‘android.permission.INTERNET’.
I use 2 different implementations of ReportSender:
The crash reporter sends nearly all data that’s available, the feedback reporter sends the user message, the date and the app version. Add both to your project.
Remember to change the ‘BASE_URL’. Use HTTPS if your server supports it (mine doesn’t).
There are 2 PHP scripts as well:
You will also need the mail template. Change the destination email and add the files to the webspace/server of your choice (e.g. uberspace). If you want you can change the “shared_secret”, but remember to do this in the Java class as well.
Now give it a try and test sending feedback to yourself:
ACRA.getErrorReporter().setReportSender(new ACRAFeedbackMailer());
ACRA.getErrorReporter().putCustomData("User message", "Some Text here");
ACRA.getErrorReporter().handleSilentException(new NullPointerException("Test"));
If this works you need a suitable spot for your user feedback. In most cases a dialog should be fine.
Consider writing your own class(es) that extend Exception. Your PHP script could do further processing with this information.
As you have a server-side script it is very easy to change the formatting of the emails. Highlighting the user comments or the type of exception may be a good first step.
With the use of two different implementations of ReportSender it is also possible to use email only for sending feedback and send crash reports to another backend that is better suited for bug tracking. For larger projects this approach is recommended.
by Christian Skubich
eMail: christian@skubware.de
Twitter: @chaosbastler
A guide to connecting to a different machine using an Ethernet cable for internet sharing or just transferring files:
Install dnsmasq and iproute2
$ pacman -S dnsmasq iproute2
Copy over the configuration files at the end of the article and edit the /etc/conf.d/share-internet@<device> to match your network setup. (where <device> is your network device)
Start the sharing service with systemd
$ sudo systemctl start share-internet@&lt;device&gt;.service
After that the other machine can connect via DHCP. It will get an IPv4 address from the 10.20.0.0/24 subnet and an IPv6 address from the fd21:30c2:dd2f:: subnet. Your host will be reachable via 10.20.0.1 or fd21:30c2:dd2f::1. Thanks to IPv6 router advertisements, an AAAA record for each host is automatically set based on its hostname. This means that if your hostname is foo, all members of the network can connect to it simply by using the name foo. You should disable the share-internet service when you don't need it; otherwise you might mess up other network setups if you connect to a network with the device on which the DHCP service is running.
Happy networking!
# google as an upstream dns server
server=8.8.8.8
server=8.8.4.4
no-resolv
cache-size=2000
Ethernet to Wlan:
# Device which has internet access, ex: wlan0 or usb0
EXTERNAL_DEVICE="wlp3s0"
IP4_ADDRESS="10.20.0.1"
IP4_NETMASK="24"
IP4_SUBNET="10.20.0.2,10.20.0.255"
IP6_ADDRESS="fd21:30c2:dd2f::1"
IP6_NETMASK="64"
IP6_SUBNET="fd21:30c2:dd2f::"
Wlan to Ethernet:
If you are lucky and your wifi driver is capable of infrastructure mode, you should take a look at hostapd; in my case I had to create an ad-hoc network. To enable the ad-hoc network:
$ sudo systemctl enable wireless-adhoc@<device>.service
# Device which has internet access, ex: wlan0 or usb0
EXTERNAL_DEVICE="enp0s20u2"
IP4_ADDRESS="10.20.0.1"
IP4_NETMASK="24"
IP4_SUBNET="10.20.0.100,10.20.0.199"
IP6_ADDRESS="fd21:30c2:dd2f::1"
IP6_NETMASK="64"
IP6_SUBNET="fd21:30c2:dd2f::"
[Unit]
Description=Ad-hoc wireless network connectivity (%i)
Wants=network.target
Before=network.target
Conflicts=netctl-auto@.service
BindsTo=sys-subsystem-net-devices-%i.device
After=sys-subsystem-net-devices-%i.device
[Service]
Type=simple
ExecStartPre=/usr/bin/rfkill unblock wifi
ExecStart=/usr/sbin/wpa_supplicant -D nl80211,wext -c/etc/wpa_supplicant/wpa_supplicant-adhoc-%I.conf -i%I
[Install]
RequiredBy=share-internet@%i.service
ctrl_interface=DIR=/run/wpa_supplicant GROUP=wheel
# use 'ap_scan=2' on all devices connected to the network
ap_scan=2
network={
ssid="The.Secure.Network"
mode=1
frequency=2432
proto=WPA
key_mgmt=WPA-NONE
pairwise=NONE
group=TKIP
psk="fnord"
}
# MacOS X and NetworkManager aren't capable of using WPA/WPA2 for ad-hoc networks
#network={
# ssid="The.Insecure.Network"
# mode=1
# frequency=2432
# proto=WPA
# key_mgmt=NONE
# pairwise=NONE
# group=TKIP
#
# wep_key0="fnord"
# wep_tx_keyidx=0
#}
$ sudo mkdir /opt/busybox/bin
$ busybox --list | xargs -n 1 -d "\n" -I "cmd" sudo ln -s $(which busybox) /opt/busybox/bin/cmd
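What that pipeline builds is a symlink farm: one link per applet, all pointing at the single busybox binary, which dispatches on the name it was invoked as (argv[0]). The same idea as a sketch, with a made-up applet list and a temporary directory standing in for /opt/busybox/bin:

```python
import os
import tempfile

busybox = "/bin/busybox"             # assumed location of the binary
applets = ["ash", "ls", "cp", "mv"]  # in reality: output of `busybox --list`

bindir = tempfile.mkdtemp()
for name in applets:
    # each applet name becomes a symlink to the one busybox binary
    os.symlink(busybox, os.path.join(bindir, name))

print(sorted(os.listdir(bindir)))
```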
In order to be able to log in to a system where the usual shell is broken, I added a new user called rescue.
$ useradd -m -s /opt/busybox/bin/ash rescue
Because the original passwd uses sha256 for password hashes, which busybox cannot handle by default, you have to recreate the password of every account you plan to log in with, to make things like su work:
$ sudo busybox passwd -a 2 rescue # use sha1 instead of sha256
$ sudo busybox passwd -a 2 root
The login shell is set in this case to the one busybox provides. In order to be able to log in via ssh, this shell has to be added to /etc/shells:
$ echo /opt/busybox/bin/ash | sudo tee -a /etc/shells
The last thing left is to prepend the directory with the busybox symlinks to the PATH variable of the rescue user, so they are used instead of their coreutils equivalents.
$ echo 'export PATH=/opt/busybox/bin:$PATH' | sudo tee -a /home/rescue/.profile
remove Lock = Caps_Lock
keysym Caps_Lock = Shift_L
add Shift = Shift_L
However, these settings sometimes got lost (e.g. after the driver was reloaded on suspend). Finally I found the event_key_remap patch from here, which allows permanently redefining keys in the xorg.conf.
To apply the patch under archlinux simply install xf86-input-evdev-remap from AUR:
yaourt -S xf86-input-evdev-remap
To track down the key you want to remap, use xev on the terminal. Just type the wanted keys a few times. The output will be something like the following:
KeyRelease event, serial 33, synthetic NO, window 0x1e00001,
root 0x8e, subw 0x0, time 5672767, (611, 262), root:(613, 288),
state 0x1, keycode 50 (keysym 0xffe1, Shift_L), same_screen YES
XLookupString gives 0 bytes:
XFilterEvent returns: False
The interesting value here is the keycode. Use this code to build your final xorg.conf. In my case this was:
#/etc/X11/xorg.conf.d/10-kb-layout.conf
Section "InputClass"
Identifier "Keyboard Defaults"
MatchIsKeyboard "yes"
Option "XkbLayout" "de" # Replace this with your layout
Option "event_key_remap" "58=50" # Caps Lock Key = Shift
EndSection
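Scraping the keycode out of the xev output can also be automated; a small sketch using the sample output from above:

```python
import re

# a captured line from xev's output
xev_output = """KeyRelease event, serial 33, synthetic NO, window 0x1e00001,
    root 0x8e, subw 0x0, time 5672767, (611, 262), root:(613, 288),
    state 0x1, keycode 50 (keysym 0xffe1, Shift_L), same_screen YES"""

m = re.search(r"keycode (\d+) \(keysym \S+ (\w+)\)", xev_output)
keycode, keysym = m.group(1), m.group(2)
print(keycode, keysym)
```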
When you have lots of requests in different areas of your project, you may want global handling of failure events — for example, showing a login view if any of the requests gives you a 401 (Unauthorized) status code.
In RestKit 0.20 they introduced the possibility to register your own RKObjectRequestOperation subclass, which is the common way to do this. So first you create a subclass of RKObjectRequestOperation; let's call it CustomRKObjectRequestOperation:
#import "RKObjectRequestOperation.h"
@interface CustomRKObjectRequestOperation : RKObjectRequestOperation
@end
@implementation CustomRKObjectRequestOperation
- (void)setCompletionBlockWithSuccess:(void (^)(RKObjectRequestOperation *operation, RKMappingResult *mappingResult))success
                              failure:(void (^)(RKObjectRequestOperation *operation, NSError *error))failure
{
    [super setCompletionBlockWithSuccess:^void(RKObjectRequestOperation *operation, RKMappingResult *mappingResult) {
        if (success) {
            success(operation, mappingResult);
        }
    } failure:^void(RKObjectRequestOperation *operation, NSError *error) {
        [[NSNotificationCenter defaultCenter] postNotificationName:@"connectionFailure" object:operation];
        if (failure) {
            failure(operation, error);
        }
    }];
}
@end
This is the point where we override the method which sets the completion and failure blocks. I use the observer pattern (NSNotificationCenter) to notify about connection failures. (Learn more about NSNotificationCenter.)
Of course we need to tell RestKit to use our custom RKObjectRequestOperation class. You can do this by adding this line to your RestKit configuration:
[[RKObjectManager sharedManager] registerRequestOperationClass:[CustomRKObjectRequestOperation class]];
Now we need a class where we listen for the failure notifications. You can choose any of your classes; I use the AppDelegate for this.
[[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(connectionFailedWithOperation:) name:@"connectionFailure" object:nil];
As the name suggests, connectionFailedWithOperation: is called when a connection failure occurs.
- (void)connectionFailedWithOperation:(NSNotification *)notification
{
    RKObjectRequestOperation *operation = (RKObjectRequestOperation *)notification.object;
    if (operation) {
        NSInteger statusCode = operation.HTTPRequestOperation.response.statusCode;
        switch (statusCode) {
            case 0: {
                // No internet connection
                break;
            }
            case 401: {
                // not authenticated
                break;
            }
            default: {
                break;
            }
        }
    }
}
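The pattern itself is not Objective-C specific; stripped to its essentials it is just a publish/subscribe registry. A minimal Python sketch of the same flow (class and key names invented for the demo):

```python
class NotificationCenter:
    """Toy stand-in for NSNotificationCenter."""
    def __init__(self):
        self.observers = {}

    def add_observer(self, name, callback):
        self.observers.setdefault(name, []).append(callback)

    def post(self, name, obj):
        for callback in self.observers.get(name, []):
            callback(obj)

center = NotificationCenter()
handled = []

# the AppDelegate registers once...
center.add_observer("connectionFailure", lambda op: handled.append(op["status"]))

# ...and every failing request operation posts the notification
center.post("connectionFailure", {"status": 401})
print(handled)
```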
Links:
RestKit Framework
Class Documentation for RKObjectRequestOperation
by Albert Schulz
If you have any questions feel free to contact me:
eMail: mail@halfco.de
Twitter: @albert_sn
Web: halfco.de
One great feature of MongoDB is that the first bytes of each ObjectId contain the time it was generated. This can be exploited to mimic the well-known created_at field from Rails. First put this file in your lib directory.
#lib/mongoid/created.rb
module Mongoid
  module CreatedAt
    # Returns the creation time calculated from ObjectID
    #
    # @return [ Date ] the creation time
    def created_at
      id.generation_time
    end

    # Set generation time of ObjectId.
    # Note: This will modify the ObjectId and
    # is therefore only useful for documents that are not yet persisted.
    #
    # @return [ BSON::ObjectId ] the generated object id
    def created_at=(date)
      self.id = BSON::ObjectId.from_time(date)
    end
  end
end
If you are still using Mongoid 3, replace BSON::ObjectId with Moped::BSON::ObjectId.
Now you can include this module in every model where you need created_at.
#app/models/user.rb
class User
  include Mongoid::Document
  include Mongoid::CreatedAt
  # ...
end
u = User.new(created_at: 1.hour.ago)
u.created_at
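The trick relies on the ObjectId layout: its first four bytes are a big-endian Unix timestamp. Sketched in Python to make the byte-level detail visible (the example ObjectId is made up, and the remaining machine/pid/counter bytes are simply zeroed):

```python
import struct
from datetime import datetime, timezone

def generation_time(oid_hex):
    # the first 4 bytes of an ObjectId are a big-endian Unix timestamp
    seconds = struct.unpack(">I", bytes.fromhex(oid_hex[:8]))[0]
    return datetime.fromtimestamp(seconds, tz=timezone.utc)

def objectid_from_time(dt):
    # the reverse: timestamp bytes followed by zeroed machine/pid/counter bytes
    return struct.pack(">I", int(dt.timestamp())).hex() + "00" * 8

oid = objectid_from_time(datetime(2013, 7, 25, tzinfo=timezone.utc))
print(oid)
print(generation_time(oid))
```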
That’s all easy enough, isn’t it?
$ systemctl status reflector-update.service
reflector-update.service - "Update pacman's mirrorlist using reflector"
Loaded: loaded
(/etc/systemd/system/timer-weekly.target.wants/reflector-update.service)
Active: inactive (dead)
Jun 09 17:58:30 higgsboson reflector[30109]: rating http://www.gtlib.gatech.edu/pub/archlinux/
Jun 09 17:58:30 higgsboson reflector[30109]: rating rsync://rsync.gtlib.gatech.edu/archlinux/
Jun 09 17:58:30 higgsboson reflector[30109]: rating http://lug.mtu.edu/archlinux/
Jun 09 17:58:30 higgsboson reflector[30109]: Server Rate Time
...
IOSchedulingPriority, Nice or JobTimeoutSec.
So let's get started. The first thing you might want to do is replace the default scripts located in the run-parts directories /etc/cron.{daily,hourly,monthly,weekly}.
On my distribution (archlinux) these are logrotate, man-db, shadow and updatedb. For convenience I created a structure like /etc/cron.*:
$ mkdir /etc/systemd/system/timer-{hourly,daily,weekly}.target.wants
and added the following timer.
$ cd /etc/systemd/system
$ wget https://blog.thalheim.io/downloads/timers.tar
$ tar -xvf timers.tar && rm timers.tar
[Unit]
Description=Hourly Timer
[Timer]
OnBootSec=5min
OnUnitActiveSec=1h
Unit=timer-hourly.target
[Install]
WantedBy=basic.target
[Unit]
Description=Hourly Timer Target
StopWhenUnneeded=yes
[Unit]
Description=Daily Timer
[Timer]
OnBootSec=10min
OnUnitActiveSec=1d
Unit=timer-daily.target
[Install]
WantedBy=basic.target
[Unit]
Description=Daily Timer Target
StopWhenUnneeded=yes
[Unit]
Description=Weekly Timer
[Timer]
OnBootSec=15min
OnUnitActiveSec=1w
Unit=timer-weekly.target
[Install]
WantedBy=basic.target
[Unit]
Description=Weekly Timer Target
StopWhenUnneeded=yes
… and enable them:
$ systemctl enable timer-hourly.timer
$ systemctl enable timer-daily.timer
$ systemctl enable timer-weekly.timer
These directories work like their cron equivalents, each service file located in such a directory will be executed at the given time.
Now move on to the service files. If you’re not running Arch, the paths might be different on your system.
$ cd /etc/systemd/system
$ wget https://blog.higgsboson.tk/downloads/services.tar
$ tar -xvf services.tar && rm services.tar
[Unit]
Description=Rotate log files
[Service]
Nice=19
IOSchedulingClass=2
IOSchedulingPriority=7
ExecStart=/usr/bin/logrotate /etc/logrotate.conf
[Unit]
Description=Update man-db
[Service]
Nice=19
IOSchedulingClass=2
IOSchedulingPriority=7
ExecStart=/usr/bin/mandb --quiet
[Unit]
Description=Update mlocate database
[Service]
Nice=19
IOSchedulingClass=2
IOSchedulingPriority=7
ExecStart=/usr/bin/updatedb
[Unit]
Description=Verify integrity of password and group files
[Service]
Type=oneshot
ExecStart=/usr/sbin/pwck -r
ExecStart=/usr/sbin/grpck -r
At last but not least you can disable cron:
$ systemctl stop cronie && systemctl disable cronie
If you want to execute a job at special calendar events, for example "every first day of the month", use the "OnCalendar=" option in the timer file. Example:
[Unit]
Description=Daily Timer
[Timer]
OnCalendar=*-*-1 0:0:0
Unit=send-bill.target
[Install]
WantedBy=basic.target
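To sanity-check what `OnCalendar=*-*-1 0:0:0` means, you can compute the next trigger time yourself (newer systemd versions also ship `systemd-analyze calendar` for this); a sketch:

```python
from datetime import datetime, timedelta

def next_first_of_month(now):
    """Next occurrence of 'day 1, midnight' -- what OnCalendar=*-*-1 0:0:0 fires on."""
    candidate = now.replace(day=1, hour=0, minute=0, second=0, microsecond=0)
    if candidate <= now:
        # jump safely into the next month, then snap back to day 1
        candidate = (candidate + timedelta(days=32)).replace(day=1)
    return candidate

print(next_first_of_month(datetime(2013, 6, 15, 12, 0)))
```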
That’s all for the moment. Have a good time using the power of systemd!
Below some service files, I use:
[Unit]
Description="Update pacman's mirrorlist using reflector"
[Service]
Nice=19
IOSchedulingClass=2
IOSchedulingPriority=7
Type=oneshot
ExecStart=/usr/bin/reflector --verbose -l 5 --sort rate --save /etc/pacman.d/mirrorlist
[Unit]
Description=Run pkgstats
[Service]
User=nobody
ExecStart=/usr/bin/pkgstats
See this link for details about my shell-based pacman notifier
[Unit]
Description=Update pacman's package cache
[Service]
Nice=19
Type=oneshot
IOSchedulingClass=2
IOSchedulingPriority=7
Environment=CHECKUPDATE_DB=/var/lib/pacman/checkupdate
ExecStartPre=/bin/sh -c "/usr/bin/checkupdates > /var/log/pacman-updates.log"
ExecStart=/usr/bin/pacman --sync --upgrades --downloadonly --noconfirm --dbpath=/var/lib/pacman/checkupdate
To get started you will need Ruby on the backup machine. I prefer using RVM for this job; feel free to choose your preferred way:
$ curl -L https://get.rvm.io | bash -s stable --autolibs=enabled
To create the backup, I use the great knife-backup gem by Marius Ducea:
$ gem install knife-backup
Then add these scripts to your system:
$ mkdir -p ~/bin && cd ~/bin
$ wget http://blog.higgsboson.tk/downloads/code/chef-backup/backup-chef.sh
$ wget http://blog.higgsboson.tk/downloads/code/chef-backup/restore-chef.sh
$ chmod +x {backup,restore}-chef.sh
#!/bin/bash
# optional: load rvm
source "$HOME/.rvm/scripts/rvm" || source "/usr/local/rvm/scripts/rvm"
cd /tmp
BACKUP=/path/to/your/backup #<--- EDIT THIS LINE
TMPDIR=/tmp/$(mktemp -d chef-backup-XXXX)
MAX_BACKUPS=8
cd $TMPDIR
trap "rm -rf '$TMPDIR'" INT QUIT TERM EXIT
knife --config $HOME/.chef/knife-backup.rb backup export -D . >/dev/null
tar -cjf "$BACKUP/$(date +%m.%d.%Y).tar.bz2" .
# keep the last X backups
ls -t "$BACKUP" | tail -n +$((MAX_BACKUPS + 1)) | xargs rm -f
#!/bin/bash
if [ "$#" -eq 0 ]; then
echo "USAGE: $0 /path/to/backup"
exit 1
fi
source "$HOME/.rvm/scripts/rvm" || source "/usr/local/rvm/scripts/rvm"
cd /tmp
TMPDIR=/tmp/$(mktemp -d chef-restore-XXXX)
cd "$TMPDIR"
trap "rm -rf '$TMPDIR'" INT QUIT TERM EXIT
tar xf $1
knife --config $HOME/.chef/knife-backup.rb backup restore -D .
Modify the BACKUP variable to match your backup destination. Next you will need a knife.rb to get access to your server. I suggest creating a new client:
$ mkdir -p ~/.chef
$ knife client create backup --admin --file "$HOME/.chef/backup.pem"
$ cat <<'__EOF__' >> ~/.chef/knife-backup.rb
log_level :info
log_location STDOUT
node_name 'backup'
client_key "#{ENV["HOME"]}/.chef/backup.pem"
chef_server_url 'https://chef.yourdomain.tld' # EDIT HERE
syntax_check_cache_path "#{ENV["HOME"]}/.chef/syntax_check_cache"
__EOF__
$ knife role list # test authentication
Now test the whole setup by running the backup-chef.sh script:
$ ~/bin/backup-chef.sh
It should create a tar file in the backup directory.
If everything works, you can add a cronjob to automate this.
$ crontab -e
@daily $HOME/bin/backup-chef.sh
To restore a backup simply run (where DATE is the date of the backup):
$ ~/bin/restore-chef.sh /path/to/backup/DATE.tar.bz2
That’s all folks!
2013/04/19 22:14:38 [error] 32402#0: *251 FastCGI sent in stderr: "Access to the
script '/var/www/cloud' has been denied (see security.limit_extensions)" while
reading response header from upstream, client: ::1, server:
cloud.higgsboson.tk, request: "GET /index.php HTTP/1.1", upstream:
"fastcgi://unix:/var/run/php-fpm.sock:", host: "cloud.higgsboson.tk"
The problem here was again a missing fastcgi_params option.
To solve the problem include the following line either in ‘/etc/nginx/fastcgi_params’
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
# ...
or in the owncloud block in nginx.conf:
server {
listen 80;
server_name cloud.example.com;
return 301 https://$server_name$request_uri; # enforce https
}
server {
listen 443 ssl;
server_name cloud.example.com;
ssl_certificate /etc/ssl/nginx/cloud.example.com.crt;
ssl_certificate_key /etc/ssl/nginx/cloud.example.com.key;
# Path to the root of your installation
root /var/www/;
client_max_body_size 10G; # set max upload size
fastcgi_buffers 64 4K;
rewrite ^/caldav(.*)$ /remote.php/caldav$1 redirect;
rewrite ^/carddav(.*)$ /remote.php/carddav$1 redirect;
rewrite ^/webdav(.*)$ /remote.php/webdav$1 redirect;
index index.php;
error_page 403 = /core/templates/403.php;
error_page 404 = /core/templates/404.php;
location ~ ^/(data|config|\.ht|db_structure\.xml|README) {
deny all;
}
location / {
# The following 2 rules are only needed with webfinger
rewrite ^/.well-known/host-meta /public.php?service=host-meta last;
rewrite ^/.well-known/host-meta.json /public.php?service=host-meta-json
last;
rewrite ^/.well-known/carddav /remote.php/carddav/ redirect;
rewrite ^/.well-known/caldav /remote.php/caldav/ redirect;
rewrite ^(/core/doc/[^\/]+/)$ $1/index.html;
try_files $uri $uri/ index.php;
}
location ~ ^(.+?\.php)(/.*)?$ {
try_files $1 =404;
include fastcgi_params;
fastcgi_param PATH_INFO $2;
fastcgi_param HTTPS on;
# THIS LINE WAS ADDED
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_pass 127.0.0.1:9000;
# Or use unix-socket with 'fastcgi_pass unix:/var/run/php5-fpm.sock;'
}
# Optional: set long EXPIRES header on static assets
location ~* ^.+\.(jpg|jpeg|gif|bmp|ico|png|css|js|swf)$ {
expires 30d;
# Optional: Don't log access to assets
access_log off;
}
}
After you signed up for a hub, in my case higgsboson.superfeedr.com, you have to add a hub reference to your atom feed.
# ....
# pubsubhubbub
hub_url: http://higgsboson.superfeedr.com/ # <--- replace this with your hub
Insert this line:
{% raw %}
{% if site.hub_url %}<link href="{{ site.hub_url }}" rel="hub"/>{% endif %}
{% endraw %}
into source/atom.xml, so it looks like this:
{% raw %}
<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
<title><![CDATA[{{ site.title }}]]></title>
<link href="{{ site.url }}/atom.xml" rel="self"/>
<link href="{{ site.url }}/"/>
{% if site.hub_url %}<link href="{{ site.hub_url }}" rel="hub"/>{% endif %}
<updated>{{ site.time | date_to_xmlschema }}</updated>
<id>{{ site.url }}/</id>
<author>
<name><![CDATA[{{ site.author | strip_html }}]]></name>
{% if site.email %}<email><![CDATA[{{ site.email }}]]></email>{% endif %}
</author>
<generator uri="http://octopress.org/">Octopress</generator>
{% for post in site.posts limit: 20 %}
<entry>
<title type="html"><![CDATA[{{ post.title | cdata_escape }}]]></title>
<link href="{{ site.url }}{{ post.url }}"/>
<updated>{{ post.date | date_to_xmlschema }}</updated>
<id>{{ site.url }}{{ post.id }}</id>
<content type="html"><![CDATA[{{ post.content | expand_urls: site.url | cdata_escape }}]]></content>
</entry>
{% endfor %}
</feed>
{% endraw %}
To push out updates, you have to ping your hub, this is easily done in your deploy rake task.
Add these lines to the end of your deploy task in your Rakefile:
require 'net/http'
require 'uri'
hub_url = "http://higgsboson.superfeedr.com/" # <--- replace this with your full hub url
atom_url = "http://blog.higgsboson.tk/atom.xml" # <--- replace this with your full feed url
resp, data = Net::HTTP.post_form(URI.parse(hub_url),
{'hub.mode' => 'publish',
'hub.url' => atom_url})
raise "!! Hub notification error: #{resp.code} #{resp.msg}, #{data}" unless resp.code == "204"
puts "## Notified hub (" + hub_url + ") that feed #{atom_url} has been updated"
So you end up with something like this:
desc "Default deploy task"
task :deploy do
# Check if preview posts exist, which should not be published
if File.exists?(".preview-mode")
puts "## Found posts in preview mode, regenerating files ..."
File.delete(".preview-mode")
Rake::Task[:generate].execute
end
Rake::Task[:copydot].invoke(source_dir, public_dir)
Rake::Task["#{deploy_default}"].execute
require 'net/http'
require 'uri'
hub_url = "http://higgsboson.superfeedr.com/" # <--- replace this with your full hub url
atom_url = "http://blog.higgsboson.tk/atom.xml" # <--- replace this with your full feed url
resp, data = Net::HTTP.post_form(URI.parse(hub_url),
{'hub.mode' => 'publish',
'hub.url' => atom_url})
raise "!! Hub notification error: #{resp.code} #{resp.msg}, #{data}" unless resp.code == "204"
puts "## Notified hub (" + hub_url + ") that feed #{atom_url} has been updated"
end
Now whenever you run rake deploy, it will automatically update your hub.
If you have a jabber or google talk account, you can easily verify your setup by adding push-bot to your contact list and subscribe to your feed.
$ sudo add-apt-repository ppa:formorer/icinga
$ sudo add-apt-repository ppa:formorer/icinga-web
$ sudo apt-get update
# without --no-install-recommends, it will try to install apache
$ sudo apt-get --no-install-recommends install icinga-web
$ sudo apt-get install icinga-web-pnp # optional: for pnp4nagios
$ sudo apt-get install nginx php5-fpm # if not already installed
For PHP I just use php-fpm without any special configuration. If you installed icinga from source, you have to change the roots to match your installation path (to /usr/local/icinga-web/).
upstream fpm {
server unix:/var/run/php5-fpm.sock;
}
server {
listen 80;
listen 443 ssl;
# FIXME
server_name icinga.yourdomain.tld;
access_log /var/log/nginx/icinga.access.log;
error_log /var/log/nginx/icinga.error.log;
# FIXME
ssl_certificate /etc/ssl/private/icinga.yourdomain.tld.crt;
ssl_certificate_key /etc/ssl/private/icinga.yourdomain.tld.pem;
# Security - Basic configuration
location = /favicon.ico {
log_not_found off;
access_log off;
expires max;
}
location = /robots.txt {
allow all;
log_not_found off;
access_log off;
}
# Deny access to hidden files
location ~ /\. {
deny all;
access_log off;
log_not_found off;
}
root /usr/share/icinga-web/pub;
location /icinga-web/styles {
alias /usr/share/icinga-web/pub/styles;
}
location /icinga-web/images {
alias /usr/share/icinga-web/pub/images;
}
location /icinga-web/js {
alias /usr/share/icinga-web/lib;
}
location /icinga-web/modules {
rewrite ^/icinga-web/(.*)$ /index.php?/$1 last;
}
location /icinga-web/web {
rewrite ^/icinga-web/(.*)$ /index.php?/$1 last;
}
#>>> configuration for pnp4nagios
location /pnp4nagios {
alias /usr/share/pnp4nagios/html;
}
location ~ ^(/pnp4nagios.*\.php)(.*)$ {
root /usr/share/pnp4nagios/html;
include fastcgi_params;
fastcgi_split_path_info ^(.+\.php)(.*)$;
fastcgi_param PATH_INFO $fastcgi_path_info;
fastcgi_param SCRIPT_FILENAME $document_root/index.php;
fastcgi_pass fpm;
}
#<<<
location / {
root /usr/share/icinga-web/pub;
index index.php;
location ~* ^/(robots.txt|static|images) {
break;
}
if ($uri !~ "^/(favicon.ico|robots.txt|static|index.php)") {
rewrite ^/([^?]*)$ /index.php?/$1 last;
}
}
location ~ \.php$ {
include /etc/nginx/fastcgi_params;
fastcgi_split_path_info ^(/icinga-web)(/.*)$;
fastcgi_pass fpm;
fastcgi_index index.php;
}
}
The basic installation is pretty easy:
$ apt-get install systemd
Then you need to tell the kernel to use systemd as the init system. To do so, append init=/bin/systemd to the end of the line in /boot/cmdline.txt:
$ cat /boot/cmdline.txt
dwc_otg.lpm_enable=0 console=ttyAMA0,115200 kgdboc=ttyAMA0,115200 console=tty1 root=/dev/mmcblk0p2 rootfstype=ext4 elevator=deadline rootwait init=/bin/systemd
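One way to do this is a one-line sed. Below is a sketch demonstrated on a scratch copy: cmdline.txt must stay a single line, and a typo here can make the Pi unbootable, so keep a backup of the real file and run the sed against /boot/cmdline.txt as root.

```shell
# Demonstrated on a scratch file; on the Pi, back up /boot/cmdline.txt
# first and run the same sed against the real file as root.
printf 'console=tty1 root=/dev/mmcblk0p2 rootfstype=ext4 rootwait\n' > /tmp/cmdline-demo.txt
sed -i 's|$| init=/bin/systemd|' /tmp/cmdline-demo.txt
cat /tmp/cmdline-demo.txt
# -> console=tty1 root=/dev/mmcblk0p2 rootfstype=ext4 rootwait init=/bin/systemd
```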
If you reboot, systemd will be used instead of the default init script.
Currently, Debian's version of systemd doesn't ship many service files by default. Systemd automatically falls back to the LSB init script if a service file for a daemon is missing, so the speedup isn't as big as on distributions such as Arch Linux or Fedora, which provide deeper integration.
To get a quick overview of which services are started natively, type the following command:
$ systemctl list-units
All units whose description contains LSB: are launched through LSB init scripts.
Writing your own service files is straightforward. If you add custom service files, put them in /etc/systemd/system, so they will not get overwritten by updates.
For further information about systemd, I recommend the great Arch Linux wiki article.
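As a minimal sketch of dropping in a custom unit, here a scratch directory stands in for /etc/systemd/system (the service name and ExecStart are made-up examples); on the real system you would write the file there and activate it with systemctl daemon-reload and systemctl enable:

```shell
# /tmp stands in for /etc/systemd/system in this demonstration
mkdir -p /tmp/etc-systemd-system
cat > /tmp/etc-systemd-system/hello.service <<'EOF'
[Unit]
Description=Example custom service
After=network.target

[Service]
Type=simple
ExecStart=/usr/bin/env echo hello

[Install]
WantedBy=multi-user.target
EOF
cat /tmp/etc-systemd-system/hello.service
```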
At the end of this article, I provide some basic unit files I use. I ported them over mostly from Arch Linux; in most cases I just had to adjust the path of the binary to get them working (from /usr/bin to /usr/sbin, for example). It is important that the service name matches the name of the init script, so systemd will use the unit instead. This will not work in all cases, e.g. dhcpcd, where the unit name contains the specific network device (like dhcpcd@eth0). In such cases, you have to remove the original service with update-rc.d and enable the service file with systemctl enable.
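For the dhcpcd@eth0 case, the two steps look roughly like this (a sketch; run as root, and the instance name must match your actual network device):

```shell
# Remove the sysvinit script from the boot sequence...
update-rc.d dhcpcd remove
# ...and enable the systemd instance unit for eth0 instead
systemctl enable dhcpcd@eth0.service
```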
Also available as gist:
# IMPORTANT: only works with dhcpcd5 not the old dhcpcd3!
[Unit]
Description=dhcpcd on %I
Wants=network.target
Before=network.target
[Service]
Type=forking
PIDFile=/run/dhcpcd-%I.pid
ExecStart=/sbin/dhcpcd -A -q -w %I
ExecStop=/sbin/dhcpcd -k %I
[Install]
Alias=multi-user.target.wants/dhcpcd@eth0.service
[Unit]
Description=Pro-active monitoring utility for unix systems
After=network.target
[Service]
Type=simple
ExecStart=/usr/bin/monit -I
ExecStop=/usr/bin/monit quit
ExecReload=/usr/bin/monit reload
[Install]
WantedBy=multi-user.target
[Unit]
Description=Network Time Service
After=network.target nss-lookup.target
[Service]
Type=forking
PrivateTmp=true
ExecStart=/usr/sbin/ntpd -g -u ntp:ntp
ControlGroup=cpu:/
[Install]
WantedBy=multi-user.target
[Unit]
Description=SSH Key Generation
ConditionPathExists=|!/etc/ssh/ssh_host_key
ConditionPathExists=|!/etc/ssh/ssh_host_key.pub
ConditionPathExists=|!/etc/ssh/ssh_host_ecdsa_key
ConditionPathExists=|!/etc/ssh/ssh_host_ecdsa_key.pub
ConditionPathExists=|!/etc/ssh/ssh_host_dsa_key
ConditionPathExists=|!/etc/ssh/ssh_host_dsa_key.pub
ConditionPathExists=|!/etc/ssh/ssh_host_rsa_key
ConditionPathExists=|!/etc/ssh/ssh_host_rsa_key.pub
[Service]
ExecStart=/usr/bin/ssh-keygen -A
Type=oneshot
RemainAfterExit=yes
[Install]
WantedBy=multi-user.target
[Unit]
Conflicts=ssh.service
[Socket]
ListenStream=22
Accept=yes
[Install]
WantedBy=sockets.target
[Unit]
Description=SSH Per-Connection Server
Requires=sshdgenkeys.service
After=syslog.target
After=sshdgenkeys.service
[Service]
ExecStartPre=/bin/mkdir -m700 -p /var/run/sshd
ExecStart=-/usr/sbin/sshd -i
ExecReload=/bin/kill -HUP $MAINPID
StandardInput=socket
[Unit]
Description=Daemon which acts upon network cable insertion/removal
[Service]
Type=forking
PIDFile=/run/ifplugd.%i.pid
ExecStart=/usr/sbin/ifplugd %i
SuccessExitStatus=0 1 2
[Install]
WantedBy=multi-user.target
Because I use a custom installation of pyLoad, I had to write my own init script. Here is the init script I use:
#!/sbin/runscript
depend() {
need net
}
PYLOAD_USER=${PYLOAD_USER:-root}
PYLOAD_GROUP=${PYLOAD_GROUP:-root}
PYLOAD_CONFDIR=${PYLOAD_CONFDIR:-/etc/pyload}
PYLOAD_PIDFILE=${PYLOAD_PIDFILE:-/var/run/${SVCNAME}.pid}
PYLOAD_EXEC=${PYLOAD_EXEC:-/usr/bin/pyload}
start() {
ebegin "Starting pyload"
start-stop-daemon --start --exec "${PYLOAD_EXEC}" \
--pidfile $PYLOAD_PIDFILE \
--user $PYLOAD_USER:$PYLOAD_GROUP \
-- -p $PYLOAD_PIDFILE --daemon ${PYLOAD_OPTIONS}
eend $? "Failed to start pyload"
}
stop() {
ebegin "Stopping pyload"
start-stop-daemon --stop \
--pidfile $PYLOAD_PIDFILE \
--exec "${PYLOAD_EXEC}"
eend $? "Failed to stop pyload"
}
Here is the configuration:
PYLOAD_USER=pyload
PYLOAD_GROUP=pyload
PYLOAD_EXEC=/home/pyload/bin/pyLoadCore.py
PYLOAD_CONFDIR=/home/pyload/.pyload
PYLOAD_PIDFILE=/home/pyload/${SVCNAME}.pid
PYLOAD_OPTIONS=
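With the script saved as /etc/init.d/pyload and the configuration as /etc/conf.d/pyload (OpenRC reads the conf.d file matching the script name), the service is enabled the usual OpenRC way; a sketch, run as root:

```shell
# Add pyload to the default runlevel and start it right away
rc-update add pyload default
/etc/init.d/pyload start
```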
Open your ~/.ssh/config on your local machine and add the following lines:
Host webtunnel
HostName domain.tld # replace this with your ip or domain name of your server
DynamicForward 1080
User myuser # replace this with your ssh login name
Next, connect to your server like this:
ssh webtunnel
This opens a SOCKS connection on your local machine on port 1080. Now you can configure any application to use this proxy. These are the commonly required settings:
Server: localhost
Port: 1080
Proxy-Type: SOCKS5
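To verify the tunnel works, curl can talk SOCKS5 directly (this assumes ssh webtunnel is running in another terminal; any site will do as the target):

```shell
# --socks5-hostname also resolves DNS through the tunnel,
# so lookups don't leak to your local resolver
curl --socks5-hostname localhost:1080 https://ifconfig.me
```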
Personally, I use the FoxyProxy Basic extension for Firefox to quickly set up the connection whenever needed.
Shortly after writing this entry, I discovered a good one.
Nginx doesn't understand the .htaccess file that ships with ownCloud, so some rewrites required by the WebDAV implementation aren't applied. To get ownCloud running, some additional options are necessary:
upstream backend {
server unix:/var/run/php-fpm.sock; # <--- edit me
}
# force https
server {
listen 80;
server_name cloud.site.com;
rewrite ^ https://$server_name$request_uri? permanent;
}
server {
listen 443 ssl;
ssl_certificate /etc/ssl/nginx/nginx.crt;
ssl_certificate_key /etc/ssl/nginx/nginx.key;
server_name cloud.site.com; # <--- edit me
root /var/web/MyOwncloud; # <--- edit me
index index.php;
client_max_body_size 20M; # set maximum upload size
access_log /var/log/nginx/cloud.access_log main;
error_log /var/log/nginx/cloud.error_log info;
location ~* ^.+.(jpg|jpeg|gif|bmp|ico|png|css|js|swf)$ {
expires 30d;
access_log off;
}
# deny direct access
location ~ ^/(data|config|\.ht|db_structure.xml|README) {
deny all;
}
location / {
# these lines replace the rewrites made in ownCloud's .htaccess
try_files $uri $uri/ @webdav;
}
location @webdav {
include fastcgi_params;
fastcgi_pass backend;
fastcgi_param HTTPS on;
fastcgi_split_path_info ^(.+\.php)(/.*)$;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}
location ~ \.php$ {
include fastcgi_params;
fastcgi_pass backend;
fastcgi_param HTTPS on;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}
}
Additionally, I added these lines to the default /etc/nginx/fastcgi_params:
fastcgi_param PATH_INFO $fastcgi_path_info;
fastcgi_param PATH_TRANSLATED $document_root$fastcgi_path_info;
So it looks like this:
fastcgi_param PATH_INFO $fastcgi_path_info;
fastcgi_param PATH_TRANSLATED $document_root$fastcgi_path_info;
fastcgi_param QUERY_STRING $query_string;
fastcgi_param REQUEST_METHOD $request_method;
fastcgi_param CONTENT_TYPE $content_type;
fastcgi_param CONTENT_LENGTH $content_length;
fastcgi_param SCRIPT_NAME $fastcgi_script_name;
fastcgi_param REQUEST_URI $request_uri;
fastcgi_param DOCUMENT_URI $document_uri;
fastcgi_param DOCUMENT_ROOT $document_root;
fastcgi_param SERVER_PROTOCOL $server_protocol;
fastcgi_param HTTPS $https if_not_empty;
fastcgi_param GATEWAY_INTERFACE CGI/1.1;
fastcgi_param SERVER_SOFTWARE nginx/$nginx_version;
fastcgi_param REMOTE_ADDR $remote_addr;
fastcgi_param REMOTE_PORT $remote_port;
fastcgi_param SERVER_ADDR $server_addr;
fastcgi_param SERVER_PORT $server_port;
fastcgi_param SERVER_NAME $server_name;
# PHP only, required if PHP was built with --enable-force-cgi-redirect
fastcgi_param REDIRECT_STATUS 200;
PHP: If your upload size is still lower than the one set in nginx's configuration, increase the size in php.ini as described here
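For reference, these are the relevant php.ini directives (the 20M value is just an example; keep it in sync with nginx's client_max_body_size):

```ini
; php.ini - both values limit uploads; the smaller one wins
upload_max_filesize = 20M
post_max_size = 20M
```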