Friday, October 9, 2020

Authenticating to MongoDB using a keyfile

Sometimes, it might be useful to authenticate to MongoDB using a keyfile. (This requires the instance to be configured with a keyfile, which is mostly used for replica set / sharding internal authentication.)

Use cases for this include:

  • Password resets
  • Authenticating on shards from a config server with the same credentials for all shards (e.g. for keyhole, which assumes this is possible and produces errors if shards use different passwords)
  • Auto-detected credentials for scripts that need to run (as root) on multiple nodes
  • Automation of operations on the database (e.g. creating a user with Ansible, without knowing whether the user already exists)

Keyfile authentication uses SCRAM (the exact variant depends on the MongoDB version) in the same way that user authentication does; the keyfile contents, with all whitespace stripped, act as the password.

Knowing this, I searched for references to "SCRAM-SHA-1" and "keyfile" and came across information indicating that the username used is "__system". I found hints at this in the last diff on this change in the MongoDB source code.

To log in to the local mongodb instance using the keyfile /etc/mongo.keyfile (as root, so that it can be read), the following command can be used:
mongo -u __system -p "$(tr -d '[:space:]' < /etc/mongo.keyfile)" --authenticationDatabase admin

If a connection string is used instead, the password needs to be URL encoded: (This version uses Perl for URL encoding, which might not be available everywhere)
mongo "mongodb://__system:$(tr -d '[:space:]' < /etc/mongo.keyfile | perl -ple 's/([^A-Za-z0-9])/sprintf("%%%02X", ord($1))/seg')@localhost:27017/?authSource=admin"
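If Perl is not available, python3 can do the same URL encoding; a sketch, assuming python3 is installed (the urlencode helper name is mine, not a standard tool):

```shell
# Hypothetical helper: URL-encode stdin using python3 instead of Perl
urlencode() {
  python3 -c 'import sys, urllib.parse; sys.stdout.write(urllib.parse.quote(sys.stdin.read(), safe=""))'
}

# Usage with the keyfile (same connection string as above):
# mongo "mongodb://__system:$(tr -d '[:space:]' < /etc/mongo.keyfile | urlencode)@localhost:27017/?authSource=admin"
```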

Note: This will not work on a YAML keyfile, as supported in MongoDB 4.2 or later. (The password for the system user should still be possible to extract using other methods though)
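For the auto-detected-credentials use case above, the keyfile path can be read from the server's own config; a minimal sketch, assuming the standard /etc/mongod.conf layout and a plain (non-YAML) keyfile:

```shell
# Hypothetical helper: strip all whitespace from a keyfile to derive the
# __system password (keyfile_password is my name, not a standard tool)
keyfile_password() {
  tr -d '[:space:]' < "$1"
}

# Assumption: the keyfile path appears as "keyFile: /path" in /etc/mongod.conf
keyfile=$(awk '$1 == "keyFile:" {print $2}' /etc/mongod.conf 2>/dev/null || true)
# Then, for example:
# mongo -u __system -p "$(keyfile_password "$keyfile")" --authenticationDatabase admin
```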

A Percona blog post that also mentions this method

Friday, January 17, 2020

Euro cylinder lock fixing screw sizes

Euro profile cylinder locks use M5x70mm countersunk machine screws.

(The size is hard to find: the M5 thread is given in the relevant standard, DIN 18252, but the length needs to be dug out of forums)

Wednesday, May 2, 2018

Managing FirewallD ipsets and services using Ansible

Ansible's FirewallD module (in Ansible 2.4 and 2.5, and at least up to version 1.4.0 of the ansible.posix collection) supports managing a subset of FirewallD functionality.

Currently, the creation and management of services and ipsets are not supported.

The module is being refactored to allow for support of additional functionality.

However, since FirewallD's permanent config is stored in XML files, it is possible to deploy services and ipsets using Ansible's template module instead.

For the functionality that I need (services consisting of just ports, and ipsets containing networks or IPs), I use these templates:

firewalld-ipset.xml.j2
<?xml version="1.0" encoding="utf-8"?>
<ipset type="hash:{{ item.type }}">
  <description>{{ item.description }}</description>
{% if item.options is defined %}
{% for option in item.options %}
  <option name="{{ option.name }}" value="{{ option.value }}"/>
{% endfor %}
{% endif %}
{% for entry in item.entries if entry != "" %}
  <entry>{{ entry }}</entry>
{% endfor %}
</ipset>
firewalld-service.xml.j2
<?xml version="1.0" encoding="utf-8"?>
<service>
  <description>{{ item.description }}</description>
{% if item.ports is defined %}
{% for port in item.ports %}
  <port protocol="{{ port.type }}" port="{{ port.port }}"/>
{% endfor %}
{% endif %}
{% if item.protocols is defined %}
{% for proto in item.protocols %}
  <protocol value="{{ proto }}"/>
{% endfor %}
{% endif %}
</service>

Variables need to be set up to configure what these tasks deploy. Additional entries can be added to deploy multiple services / ipsets with a single task.

sample_ipsets:
- filename: private-ips.xml
  description: Private IPs IPset
  type: net
  entries:
  - 10.0.0.0/8
  - 192.168.0.0/16
  - 172.16.0.0/12
- filename: monitoring-servers.xml
  description: Monitoring server IPs
  type: ip
  entries:
  - 192.168.0.1
  - 10.2.3.4
- filename: monitoring-servers-ipv6.xml
  description: Monitoring server IPv6s
  type: ip
  options:
  - name: family
    value: inet6
  entries:
  - 2001:0db8:85a3:0000:0000:8a2e:0370:7334
  - 2001:db8::2:1

sample_services:
- filename: nrpe.xml
  description: Nagios NRPE service
  ports:
    - type: tcp
      port: 5666
- filename: ip-in-ip.xml
  description: IP-in-IP encapsulation
  protocols:
    - ipencap
- filename: dns-and-ntp.xml
  description: Service for easily opening NTP and DNS
  ports:
    - type: udp
      port: 53
    - type: udp
      port: 123

Sample tasks used to deploy the configs based on these templates:

- name: FirewallD services
  ansible.builtin.template:
     src: firewalld-service.xml.j2
     dest: /etc/firewalld/services/{{ item.filename }}
     owner: root
     group: root
     mode: 0644
  with_items: "{{ sample_services }}"

- name: FirewallD IPsets
  ansible.builtin.template:
     src: firewalld-ipset.xml.j2
     dest: /etc/firewalld/ipsets/{{ item.filename }}
     owner: root
     group: root
     mode: 0644
  with_items: "{{ sample_ipsets }}"

# You might want to use a handler for this instead
# It might also be possible to do this with the systemd module
# This will cause any non-permanent changes to be lost
- name: Reload FirewallD
  ansible.builtin.command: firewall-cmd --reload
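The comments above suggest a handler; a sketch of how that might look in a play (the notify wiring and section layout are assumptions, not part of the original tasks):

```yaml
  tasks:
    - name: FirewallD services
      ansible.builtin.template:
        src: firewalld-service.xml.j2
        dest: /etc/firewalld/services/{{ item.filename }}
        owner: root
        group: root
        mode: 0644
      with_items: "{{ sample_services }}"
      notify: Reload FirewallD

  handlers:
    - name: Reload FirewallD
      ansible.builtin.command: firewall-cmd --reload
```

This way FirewallD is only reloaded when a template actually changed, which also limits how often non-permanent changes are lost.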

Monday, August 28, 2017

Solaris 10 - fiocompress (UFS file compression) settings

Bernd Schemmer has an interesting post about using fiocompress for file-system level compression of individual files on UFS file systems.

I did some experimentation and found a few more things:

  • Increasing the blocksize from the default of 8192 increases the compression ratio
  • The compression ratio seems to be somewhere between gzip and compress (on a text file)
  • Setting the blocksize to 65536 (64KiB) results in an unreadable and undeletable file (at least with normal tools on the test system; this is fixed in the latest recommended patch bundle)
  • Using blocksizes below 8192 also results in unusable files (I only tested powers of 2; this is also fixed in the latest recommended patch bundle)
  • fiocompress uses an ioctl call to mark a file as compressed if -m is passed. No method to unmark a marked file exists, even in the filesystem driver. (It is possible to modify the OpenSolaris fiocompress to add an option to just mark (a previously compressed) file as compressed) (Look at ufs_vnops.c for the _FIO_COMPRESSED ioctl implementation)


Test compression code:
ls -lah testfile.txt; du testfile.txt; du -h testfile.txt; for b in 256 512 1024 2048 4096 8192 16384 32768 65536; do fiocompress -b $b -c -m testfile.txt testfile.txt$b; done

Results: (including other common formats)
$ ls -lah testfile.txt*
testfile.txt1024: Operation not applicable
testfile.txt2048: Operation not applicable
testfile.txt256: Operation not applicable
testfile.txt4096: Operation not applicable
testfile.txt512: Operation not applicable
testfile.txt65536: Operation not applicable
-rw-r--r--   1 user   group      101M Mar 11 10:26 testfile.txt
-rw-------   1 user   group      4.4M Mar 11 10:29 testfile.txt.7z
-rw-r--r--   1 user   group      5.2M Mar 11 10:26 testfile.txt.bz2
-rw-r--r--   1 user   group      7.2M Mar 11 10:28 testfile.txt.gzip
-rw-r--r--   1 user   group      8.9M Mar 11 10:28 testfile.txt.gzip-1
-rw-r--r--   1 user   group      6.7M Mar 11 10:28 testfile.txt.gzip-9
-rw-r--r--   1 user   group       13M Mar 11 10:27 testfile.txt.Z
-rw-r--r--   1 user   group      7.2M Mar 11 10:32 testfile.txt.zip
-rw-r--r--   1 user   group      101M Mar 11 10:55 testfile.txt16384
-rw-r--r--   1 user   group      101M Mar 11 10:55 testfile.txt32768
-rw-r--r--   1 user   group      101M Mar 11 10:55 testfile.txt8192

$ du testfile.txt*
206544  testfile.txt
9040    testfile.txt.7z
10624   testfile.txt.bz2
14848   testfile.txt.gzip
18240   testfile.txt.gzip-1
13840   testfile.txt.gzip-9
27632   testfile.txt.Z
14848   testfile.txt.zip
18784   testfile.txt16384
16480   testfile.txt32768
22656   testfile.txt8192
$ du -h testfile.txt*
 101M   testfile.txt
 4.4M   testfile.txt.7z
 5.2M   testfile.txt.bz2
 7.2M   testfile.txt.gzip
 8.9M   testfile.txt.gzip-1
 6.8M   testfile.txt.gzip-9
  13M   testfile.txt.Z
 7.2M   testfile.txt.zip
 9.2M   testfile.txt16384
 8.0M   testfile.txt32768
  11M   testfile.txt8192


eFiling and eHomeAffairs in Chrome

Google bundles Flash with Chrome (making it the only option for some things on GNU/Linux); however, they have recently started phasing out Flash. As part of that, Chrome hides the presence of Flash from websites, but gives users an option to enable Flash on a site if the page attempts to use it. eHomeAffairs and SARS eFiling give an error when Flash is not detected and don't attempt to load the content anyway, which means that the "Click to run" option does not work.

Recommended method: MyBroadband documented one method to get eFiling working. For eHomeAffairs, the address to add to the list is "https://ehome.dha.gov.za".

Alternative, works on many more sites: Another option is to configure Chrome not to hide Flash from websites. This can be done by visiting "chrome://flags" in the address bar and setting the "Prefer HTML5 over Flash" setting to "Disabled" ("chrome://flags/#prefer-html-over-flash" will take you directly to the setting). You need to restart Chrome for the setting to take effect. The content will then load. (Tested on Chrome 60). On some sites, the Flash content may still be click-to-run, however Chrome seems to currently run it automatically on both eHomeAffairs and SARS eFiling when this flag is set. (Chrome will attempt to detect important Flash content and enable that automatically)

Update: The chrome://flags method stopped working in Chrome 61. Adding the site to content settings as being allowed to run Flash as per the MyBroadband article still works.

Friday, February 24, 2017

OpenSSL cipher suite without forward secrecy

Firstly, you should not use this in normal use.

Sometimes, you might need to debug a problem that occurs behind TLS.

Wireshark can decode TLS traffic, given the session keys, or, if forward-secrecy ciphers were not used, the server's private key.

In the case of web traffic, the SSLKEYLOGFILE environment variable can tell NSS (used by some browsers) to log the session keys. This is a better method than the one described here, but it is not an option if other clients are used, say in the case of SMTP.

An (OpenSSL) ciphersuite setting that excludes ciphers providing forward secrecy, while keeping strong ciphers, is:
HIGH:!aNULL:!eNULL:!EXPORT:!DES:!MD5:!PSK:!RC4:!DHE:!ECDHE:!EDH:!EECDH
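To check what the string expands to, and confirm that no DHE/ECDHE key exchanges remain, it can be passed to `openssl ciphers`:

```shell
# Expand the cipher string; the resulting list should contain no DHE/ECDHE ciphers
openssl ciphers 'HIGH:!aNULL:!eNULL:!EXPORT:!DES:!MD5:!PSK:!RC4:!DHE:!ECDHE:!EDH:!EECDH'
```

Note that on OpenSSL 1.1.1 and later the output also lists the TLS 1.3 suites, which always use ephemeral key exchange; to actually avoid forward secrecy, the connection also has to be limited to TLS 1.2 or lower (e.g. with `-no_tls1_3` on `openssl s_client`).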

This should be avoided in production and should only be used for debugging.

Wednesday, November 9, 2016

Handling messages forwarded as attachment by Outlook with MIME::Parser in Perl

Outlook sends emails that are forwarded as attachments with an .eml extension and the content-type set to application/octet-stream. According to RFC 1341, message/rfc822 should be used. The Perl module MIME::Parser will automatically parse message/rfc822 attachments, which is useful if you want to do automated processing on an email and its attachments. Outlook's use of application/octet-stream breaks this.

It is possible to fix this. I initially attempted to change the content-type and rerun the parser on the file, but that resulted in an empty part. The problem is that, according to RFC 1341, the Content-Transfer-Encoding field must be 7bit, 8bit or binary for message/rfc822 (Outlook uses base64). Once this is corrected, it works.

A Perl sample: (in this case, the email forwarded as attachment is the second attachment)
#!/usr/bin/perl

use warnings qw(all);
use MIME::Parser;
use strict;

my $fn = '/tmp/input_file.eml';

my $parser = new MIME::Parser;

$parser->output_to_core(1); # Disable the creation of temporary files

my $entity = $parser->parse_open($fn);
$entity->dump_skeleton;   # View initial structure

# Fix the fields
$entity->parts(1)->head->replace('Content-Type','message/rfc822');
$entity->parts(1)->head->replace('Content-Transfer-Encoding','7bit');

# Get encoded message
my $message = $entity->as_string;
#Re-parse
$entity = $parser->parse_data($message);

$entity->dump_skeleton;          # show final structure


Here is a general function to handle these. It uses undocumented interfaces, since there does not seem to be a documented method to replace a part with another one.
sub handle_forwarded_messages
{
   my($parser,$entity, undef) = @_;
   return undef unless ($entity && $parser);

   my($part);

   # Recursively process multipart entities, based on number of parts
   if (scalar $entity->parts) # If we have sub-parts
   {
      # Warning, next line uses undocumented interfaces..
      for (my $i = 0; $i <= $#{$entity->{ME_Parts}}; $i++) {
         $part = $entity->{ME_Parts}[$i];
         # Warning, next code line uses undocumented interfaces..
         # Replace part with its expanded version... This seems to be the only way
         $entity->{ME_Parts}[$i] = &handle_forwarded_messages($parser,$part);
      }
   } else { # Once we are at a level that does not have sub-parts...
      # Replace forwarded messages with properly expanded versions...
      if ($entity->effective_type eq 'application/octet-stream' &&
              $entity->head->recommended_filename =~ /\.eml$/) {
          $entity->head->replace('Content-Type','message/rfc822');
          $entity->head->replace('Content-Transfer-Encoding','8bit');
          my $entity_tmp = eval { $parser->parse_data($entity->as_string) };
          $entity = $entity_tmp unless ($@ || $parser->results->errors);
          # And see if they have more levels...
          $entity = &handle_forwarded_messages($parser,$entity);
      }
   }
   # Return the processed result
   return $entity;
}