Sunday, 29 November 2015

Running a meteor shell on a standalone server

For those who want to connect to Meteor's shell on a standalone/self-maintained server, the standard command you use in the development environment doesn't work. Fortunately you can 'trick' it into allowing the `meteor shell` command.
# in the script that launches your app (paths here are from a Bitnami install):
APPDIR=/opt/bitnami/apps/myapp
export METEOR_SHELL_DIR="$APPDIR/.meteor/local/shell"
# other settings
# ...

exec node $APPDIR/bundle/main.js

# then, on the server, create a minimal fake .meteor directory so the CLI is happy:
cd /opt/bitnami/apps/myapp
mkdir -p .meteor/local/shell
echo > .meteor/packages
echo 'METEOR@1.2.1' > .meteor/release

meteor shell

Monday, 5 October 2015

Javascript (ECMAScript 5.1) Refresher

I’ve recently been working heavily on JavaScript-based applications, both in Node.js and in a Chrome extension. After working mostly in Java, I thought I’d share the syntax, conventions and new features of ECMAScript 5.1, and parts of the still-pending version, ES6/ECMAScript 2015, that I’ve come across.

Comparison Operators

  • == equal to
  • === equal value and equal type
  • != not equal
  • !== not equal value or not equal type

I also saw something I thought was fancy notation:

if (!!something)
// this is not a special operator, just a double NOT - it converts the value to a boolean
(!!something) === (!(!something))
// this is only really useful if you want to set a boolean from something truthy, i.e.
var a = "somevalue";
var asBoolean = !!a; // true

Object Initializer

Something I had thought was not fully adopted syntax: being able to specify the getter/setter functions in the initializer. (MDN Object initializer)
var o = {
  func1: function (p1, p2) {},
  get prop1() { return this._prop1; },
  set prop1(value) { this._prop1 = value; }
};
This creates:

  1. a function called func1: o.func1(param1, param2)
  2. a setter function for "prop1": o.prop1 = 'newvalue';
  3. a getter function for "prop1": var oldValue = o.prop1;
Which is equivalent to the more verbose way:
var o = Object.create({}, {
  func1: {
    value: function (p1, p2) {}
  },
  prop1: {
    get: function () { return this._prop1; },
    set: function (value) { this._prop1 = value; }
  }
});

Other odd syntax


The OR operator:
// this allows fall-back assignment in case a variable is not defined or set
var myNamespace = window.myNamespace || {};
// which is equivalent to (there are many more ways to write this, but this is the simplest direct translation):
var myNamespace;
// note: using plain myNamespace instead of window.myNamespace would throw a ReferenceError if it isn't declared
if (window.myNamespace) // if defined and set to a truthy value
  myNamespace = window.myNamespace;
else
  myNamespace = {};

This is sometimes written like this:

var myNamespace = window.myNamespace || window.myNamespace={};

Which, although it seems wrong, is actually valid syntax. An assignment expression returns the assigned value, so (window.myNamespace = {}) evaluates to {}. It’s nice and compact but borders on readability vs magic syntax (those trying to debug Groovy scripts will understand what I mean).


Next the AND operator:
DEBUG && console.log("My debug statement");

// is a compact way of writing
if (DEBUG)
  console.log("My debug statement");
Don't abuse this one, as it's easy to make JavaScript look like a completely different language. My recommendation: use it in a few scenarios and use it consistently. Pretty code is readable code.
The COMMA operator:

This one I really did think was some magic syntax until I looked it up. Going by the example on the MDN comma operator reference, my best guess is that it was intended mainly for the for loop, where in hindsight I vaguely knew it was legal syntax.
This just executes each statement in turn, returning the value of the last statement.

if (doSomething)
  return retValue = invokeMethod(), console.log(retValue), retValue;
else
  return false;

Scope & Closures

Reading through a lot of articles, there is much talk about closures and warnings about scope when using nested functions. Some of the examples out there could lead you to code over-defensively (which I realised I had been doing) or leave you wondering what the big deal is. Function-scoped variables in JavaScript behave almost the way you would think at first glance.

  • Nested functions can access the variables of the parent function's scope (all the way up to the global scope --> window)
  • Parent functions can't access the variables of nested functions
  • Nested functions can still access the parent function's variables after the parent function(s) have finished executing --> the closure keeps a reference to the parent's scope, not a copy
  • Each invocation of the parent function creates a new scope, so variables defined in someFunc(1) won't overlap with those in someFunc(2)
  • If myFunc1 and myFunc2 were defined in parentFunc:
    var obj = parentFunc(123);
    obj.myFunc1();
    obj.myFunc2();
    Then myFunc1 and myFunc2 could both alter variables defined in parentFunc

This last point is the only one that you have to be wary of. This blog post has more detail, however here is the snippet which highlights the problem:

function f() {
  var arr = ["red", "green", "blue"];
  var result = [];
  for (var i = 0; i < arr.length - 1; i++) {
    var func = function () {
      return arr[i];
    };
    result.push(func);
  }
  return result;
}

This has the effect of every function in result returning "blue": each function sees the final value of i (2 here, since the loop stops at arr.length-1), not the value i had when the function was created.
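The usual fix (a sketch of my own, not from the cited post; note I loop over the full array here) is to give each function its own copy of the index via an IIFE:

```javascript
function f() {
  var arr = ["red", "green", "blue"];
  var result = [];
  for (var i = 0; i < arr.length; i++) {
    // the IIFE runs immediately, so each returned function closes
    // over its own `index` parameter instead of the shared `i`
    result.push((function (index) {
      return function () { return arr[index]; };
    })(i));
  }
  return result;
}

var funcs = f();
console.log(funcs[0]()); // "red"
console.log(funcs[2]()); // "blue"
```

In ES6, declaring the loop variable with let gives the same per-iteration binding without the extra function.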


Function expressions, declarations etc

For the most part it doesn't matter whether you use a function declaration or a function expression; there's a listing of all the function forms and their minor differences here: (MDN Functions reference). The only difference that should affect most people:


  • Function declarations can be used before they are declared as they are 'hoisted' automatically by the parser.
  • Function expressions on the other hand, need to be defined before they are invoked or assigned.
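A quick sketch of that difference (function names are my own, for illustration):

```javascript
// function declaration: hoisted, so it can be called before it appears
console.log(declared()); // "ok"
function declared() { return "ok"; }

// function expression: only the var is hoisted, not its value,
// so calling it before the assignment throws a TypeError
try {
  expression();
} catch (e) {
  console.log(e instanceof TypeError); // true
}
var expression = function () { return "ok"; };
console.log(expression()); // "ok"
```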

Classes


There are many different patterns people use to create classes, along with the ECMAScript 6 shorthand. I won’t show them all, but here is the one I use for ECMAScript 5.

var Person = function Person(name) {
  this.name = name;
  this.canTalk = true;
};
_.extend(Person.prototype, {
  prop1: null,
  prop2: 123,
  prop3: {},
  prop4: "",
  func1: function () {
    console.log("%s's prop2 is %s", this.name, this.prop2);
  }
});

var tim = new Person("Tim");
tim.func1();
// response is:
// > Tim's prop2 is 123

I’ve seen some people use Person.prototype = {..}, which has the effect of overwriting the Person.prototype.constructor property. This doesn’t cause immediate problems but can cause issues with inheritance and some frameworks. This Stackoverflow discussion shows why.
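A minimal illustration of the constructor issue (the names here are my own, not from the linked discussion):

```javascript
function Animal(name) { this.name = name; }

// overwriting the whole prototype discards the constructor property
Animal.prototype = {
  speak: function () { return this.name + "!"; }
};

var dog = new Animal("Rex");
console.log(dog.constructor === Animal); // false - it's now Object
console.log(dog.constructor === Object); // true

// restoring it by hand avoids the inheritance/framework issues
Animal.prototype.constructor = Animal;
console.log(new Animal("Rex").constructor === Animal); // true
```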

I use underscore.js in this example (jQuery’s $.extend does the same); it’s a shorthand for writing Person.prototype.prop1 = null; etc. to set all the methods and properties on the prototype so they are shared between object instances.

ECMAScript 6 allows you to easily define the functions in a class but doesn’t allow you to set the starting value for instance variables on the prototype. i.e. you’ll still need to do _.extend(Person.prototype, { prop1: … prop2: .. }); but you’ll just define the functions with the class.
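As a sketch, the Person example above would look like this with an ES6 class (the prototype default for prop2 still set separately, as described):

```javascript
class Person {
  constructor(name) {
    this.name = name;
    this.canTalk = true;
  }
  func1() {
    console.log("%s's prop2 is %s", this.name, this.prop2);
  }
}

// instance-variable defaults still go on the prototype by hand
Person.prototype.prop2 = 123;

var tim = new Person("Tim");
tim.func1();
// > Tim's prop2 is 123
```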


Namespaces


In JavaScript, namespaces (aka packages/modules) are written simply as objects:
// global namespace
var MYAPP = MYAPP || {};
// sub namespace
MYAPP.event = {};

They’re nothing magical and not built into the language; more a technique of scoping your variables and functions so they don’t clash with the numerous JavaScript libraries now in use. MDN has an intro on namespaces in their Object-oriented section. There’s also a Stackoverflow discussion: What's the best way to create a JavaScript namespace?

I found the syntax below the best supported by IDEs; as any good programmer knows, sometimes the best way to do something is the quickest/most efficient way, using the supported libraries/tools to make our lives easier. This uses an ‘immediately-invoked function expression’ (IIFE) that passes in the global object (in this case the window) to define the namespace so it can be referenced by other code. This also makes it easier to wire into something that doesn’t run in the browser, e.g. Node.js.
(function(global) {
  var MyNamespace = {};

  /**
   *
   */
  MyNamespace.method1 = function() {
    //... do something
  };

  //export
  global.MyNamespace = MyNamespace;
})(window);

I came across the article Essential JavaScript Namespacing Patterns, which has the next one. It uses an IIFE to build the namespace and return an object holding references to the ‘public’ (exported) methods.
var namespace = (function () {
  // defined within the local scope
  var privateMethod1 = function () { /* ... */ };
  var privateMethod2 = function () { /* ... */ };
  var privateProperty1 = 'foobar';
  return {
    // the object literal returned here can have as many
    // nested depths as you wish, however as mentioned,
    // this way of doing things works best for smaller,
    // limited-scope applications in my personal opinion
    publicMethod1: privateMethod1,
    // nested namespace with public properties
    properties: {
      publicProperty1: privateProperty1
    },
    // another nested namespace
    utils: {
      publicMethod2: privateMethod2
    }
    // ...
  };
})();


Modules


There are 3 basic flavours of module: CommonJS, Asynchronous Module Definition (AMD - yeah, I know, easily confused with the chip manufacturer) and ECMAScript 6 modules.

The last namespace sample I gave is pretty close to the AMD module definitions, this is taken from require.js:
//my/shirt.js now has some dependencies, a cart and inventory
//module in the same directory as shirt.js
define(["./cart", "./inventory"], function(cart, inventory) {
  //return an object to define the "my/shirt" module.
  return {
    color: "blue",
    size: "large",
    addToCart: function() {
      inventory.decrement(this);
      cart.add(this);
    }
  };
});

I personally prefer the CommonJS style, but since we have to work in the browser as well as on the server, here is the require.js wrapper for CommonJS modules:
define(['require', 'exports', 'module', 'dep1'], function (require, exports, module, dep1) {
  exports.tripple = function tripple(val1) {
    return val1 * 3;
  };

  //require is useful for cyclic dependencies
  exports.dependantFunc = function () {
    return exports.tripple(require('dep2').getValue());
  };
});

In a later post I will go into more detail on how to use ECMAScript 6 (via Babel and require.js) in the browser.


Documentation (JSDoc)


Writing documentation is the bane of nearly every developer’s existence; in JavaScript, however, it’s a must. JavaScript’s very loose typing of variables makes it near impossible for a modern IDE to give you good autocomplete suggestions, which make coding that much more efficient. This lack of typing has spawned many languages that compile to JavaScript, such as TypeScript (which IntelliJ actually uses as a source of its API definitions for open source libraries).

The JavaDoc for JavaScript, JSDoc3 (@use JSDoc), supports defining types for variables, self/this, return values etc. Together with the added syntax from Google’s Closure Compiler, you can get near Java-like API autocomplete suggestions in IntelliJ. However, I did have some difficulties with modules; if I ever remember how I got it right, I’ll post it up.
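A small sketch of the sort of annotations involved (the function and variable are made up for illustration):

```javascript
/**
 * Adds a person to the current group.
 * @param {string} name - the person's display name
 * @param {number} [age] - optional age
 * @returns {boolean} true if the person was added
 */
function addPerson(name, age) {
  // stub body - the annotations above are what drive IDE autocomplete
  return typeof name === "string";
}

/** @type {Array.<string>} */
var names = ["Tim", "Bob"];
```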


Strict Mode


Strict Mode (MDN) is a feature added in ECMAScript 5 which turns on restrictions in the scope in which it's declared; it helps catch coding typos and also lets the compiler boost performance. Mistakes that were quietly worked around in non-strict mode are raised as errors.
//use strict mode globally within this file
"use strict";

function someStrictFunc() {
  //use strict mode within this function, single or double quotes
  'use strict';
  // ...
}
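For example, a typo that would silently create a global in non-strict mode becomes an error under strict mode (names here are my own, for illustration):

```javascript
"use strict";

function setTotal() {
  try {
    totl = 42; // typo for "total" - undeclared, so strict mode throws
  } catch (e) {
    return e instanceof ReferenceError;
  }
  return false;
}

console.log(setTotal()); // true
```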

Additional Topics


Promises

Many frameworks use them and as a result they’ve been included in ECMAScript 6. Chrome already supports them natively, but for other browsers you’ll need a shim such as promise.js. They allow much better chaining of dependent functions.
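A brief sketch of that chaining (the functions are made up for illustration):

```javascript
// each step returns a promise, so the then() calls run sequentially
function getUserId() {
  return Promise.resolve(42);
}

function getProfile(id) {
  return Promise.resolve({ id: id, name: "Tim" });
}

getUserId()
  .then(function (id) { return getProfile(id); })        // waits for the id
  .then(function (profile) { console.log(profile.name); }) // > Tim
  .catch(function (err) { console.error(err); });        // handles a failure in any step
```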

Mixins

A different take on inheritance. Those who’ve used Less or Sass will recognise the concept. A fresh look at JavaScript Mixins
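A minimal sketch of the idea - copying a bundle of reusable functions onto a prototype (my own example, not from the linked article):

```javascript
// the mixin: a plain object holding reusable behaviour
var canSpeak = {
  speak: function () { return this.name + " says hi"; }
};

// a tiny extend helper (underscore's _.extend does the same job)
function mixin(target, source) {
  for (var key in source) {
    if (source.hasOwnProperty(key)) target[key] = source[key];
  }
  return target;
}

function Robot(name) { this.name = name; }
mixin(Robot.prototype, canSpeak);

console.log(new Robot("R2").speak()); // "R2 says hi"
```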

General OO (namespaces and classes)

This is another good article covering namespaces and classes in more detail. Preparing Yourself for Modern JavaScript Development

Friday, 3 July 2015

Cygwin, access control, default groups and just getting it playing nice

correcting current and default permissions

If you've been messing with your permissions when copying data across from another NTFS system, some of the owners/groups may be off, and even after correcting the owners and their permissions, any new files won't have the right defaults. This simple script replaces the ACL records for each file and directory, applying the default permissions specified below.
find $1 -type f -exec setfacl -f facl {} \;
find $1 -type d -exec setfacl -f dacl {} \;
dacl - directory permissions
user::rwx
group::rwx
other:r-x
default:user::rwx
default:group::rwx
default:other:r-x
facl - File Permissions
user::rw-
group::rw-
other:r--
default:user::rw-
default:group::rw-
default:other:r--

Specifying the default groups for users

The documentation for cygwin is in-depth but doesn't simply answer the question: how do I set the default group for a user (in the out-of-the-box configuration)?
The starting point is the mkpasswd utility. These are the following steps, assuming non-special accounts - the SYSTEM ("Local System") account can't be changed.
  1. > mkpasswd -l -u MyUser >> /etc/passwd
    This creates a mapping record in the passwd file with the default group.
  2. > id MyGroup
    uid=197613(mygroup) gid=197613(mygroup) groups=11(Authenticated Users),197613(mygroup)
    The gid is the group id we need, i.e. 197613
  3. > vi /etc/passwd
  4. Change the 4th field to the gid above i.e.
    MyUser:*:197608:197121:U-MY-PC\MyUser,S-1-5-21-818915124-687840057-3584311183-1000:/home/MyUser:/bin/bash
    becomes
    MyUser:*:197608:197613:U-MY-PC\MyUser,S-1-5-21-818915124-687840057-3584311183-1000:/home/MyUser:/bin/bash

sudo - run command as Administrator

This isn't a full implementation but it elevates the current user to admin rights.
Getting the escaping to work properly is tricky, see this article for how the rules are applied with bash.
# TODO the hardcoded path is a bit hacky
if [ "$1" == "-" ]; then
 #echo "interactive shell"
 cygstart --action=runas "C:\\cygwin64\\bin\\mintty.exe" -h always
else
 #echo "elevated command"
 CMD="cd $(pwd); $@"
 #note the order of the quotes below, we want to send the cmd as a single arg but surrounded by double quotes
 #use cygstart -v to debug
 cygstart --action=runas "C:\\cygwin64\\bin\\mintty.exe" -l sudo.log -h error -e /usr/bin/bash -l -c "\" $CMD \""
fi

#to use:
./runas-admin.sh somescript.sh param1
# or to open a shell
./runas-admin.sh -
The latest version is kept here: runas-admin.sh

su - switch user

This is a near drop-in replacement; not all options are supported, and it differs from the standard Linux su in that it opens a new window instead of using the same terminal.
if [ $DO_LOGIN == 1 ]; then
 TTYCMD="- $COMMAND"
else
 TTYCMD="$COMMAND"
fi

if [ $USER != 'root' ]; then
 cygstart --hide cmd.exe /c "\"\"%WINDIR%\\system32\\runas.exe\" /savecred /user:$USER \"$(cygpath -w /usr/bin/mintty.exe) $TTYCMD \"\""
else
        cygstart --action=runas "C:\\cygwin64\\bin\\mintty.exe" -h error -e $TTYCMD
fi

# to use, logging in to a new shell as MyUser:
./switch-user.sh - MyUser
Download the full script here: switch-user.sh

Alternate method: cyglsa

Reading through the security documentation, it gives different instructions on how to get login tokens (at a programmatic level). From what I can tell it's for sshd and/or services only; there is no mention of support for su/sudo commands.
/usr/bin/cyglsa-config

Friday, 26 June 2015

Getting PHP FastCgi Process Manager (FPM) and nginx working in cygwin

Despite the popularity of nginx and PHP, I was surprised that it wasn't easy to find a working configuration for PHP-FPM (FastCGI) with an nginx server in front, running on cygwin.
Once I had the right fragments of settings, it was a case of systematically trying them all out.

/etc/php5/php-fpm.conf:

[global]
pid = /var/run/php-fpm.pid

;note: i create a /var/log/php dir owned by the service user/group
;this allows the permissions to be inherited easily on the filesystem
error_log = /var/log/php/fpm-global.log

; cygwin user default is 256
rlimit_files = 1024

;pool configuration, having a pool config per site means you can easily have a separate log file
[www]
;they have to be set but the cygwin version ignores them
;user=service-user
;group=service-group

; The address on which to accept FastCGI requests.
listen = 127.0.0.1:8001
; or
; listen = tmp/php-cgi.socket
; when using the socket listen, set these:
listen.owner=service-user
listen.group=service-group

; this allows the process pool to be queried if it appears to be bogging down
pm.status_path = /status

php_admin_flag[log_errors] = on

/etc/php5/php.ini:

The magic to get the pretty permalinks working is on the wordpress site.
error_reporting = -1
display_errors = On
display_startup_errors = On
log_errors = On
log_errors_max_len = 0

/etc/nginx/nginx.conf:

error_log  /var/log/nginx/error.log;
pid        /var/run/nginx.pid;

events {
    worker_connections  1024;
}


http {
    access_log off;
    
    include       mime.types;
    default_type  application/octet-stream;

    sendfile        on;
    keepalive_timeout  65;

    # Upstream to abstract backend connection(s) for php
    upstream php {
      #server unix:/tmp/php-cgi.socket;
      server 127.0.0.1:8001;
    }

    include site1.conf;
}

/etc/nginx/site1.conf:

server {
  listen       8000;
  server_name  localhost;
  
  ## This should be in your http block and if it is, it's not needed here.
  index index.php;
  root   /cygdrive/c/Projects/site1/wordpress/;
  
  location / {
      try_files $uri $uri/ /index.php?$args;
  }
  rewrite /wp-admin$ $scheme://$host$uri/ permanent;
  
  # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
  #
  location ~ [^/]\.php(/|$) {           
    fastcgi_split_path_info ^(.+?\.php)(/.*)$;
    if (!-f $document_root$fastcgi_script_name) {
      return 404;
    }
    # This is a robust solution for path info security issue and works with "cgi.fix_pathinfo = 1" in /etc/php.ini (default)
    
    include fastcgi.conf;
    fastcgi_index index.php;
    fastcgi_pass php;
    
    # fastcgi_intercept_errors on;
    fastcgi_connect_timeout 40s;     

    # set the log file here so each site is different
    fastcgi_param PHP_VALUE "error_log = /var/log/php/php-site1.log";
  }
    
  # pass status page request on
  location ~ ^/(status|ping)$ {
    access_log off;
    allow 127.0.0.1;
    deny all;
    
    include fastcgi.conf;
    fastcgi_pass php;
  }
}

Finally, I had some problems connecting from cygwin PHP to a native Windows MySQL. I discovered I just had to use the localhost IP to force it over TCP/IP (instead of it trying to use unix sockets).

/var/www/wordpress/wp-config.php:

/** MySQL hostname */
define('DB_HOST', '127.0.0.1');

Update 2015/07/02:

I've hit a few problems using cygwin, nearly all of them around permissions. I've written up a summary of my ad-hoc solution, mainly because I couldn't easily find sample setups and wasted more time than one should on such a popular platform.

The launch scripts themselves are a bit hacky, but they're the best I've got so far. The cygrunsrv service launcher directly calls the service, so you don't get a shell script (though this could easily be changed). I've found running as the "Local System" account causes too many problems relating to group access, so setting up a dedicated user is easiest.

Here are the 2 service install scripts:


/etc/rc.d/init.d/nginx
/etc/init.d/php-fpm


Thursday, 25 June 2015

Remote debugging PHP from phpStorm

Despite all the documentation out there, it seemed to take me longer than it should have. Although, in hindsight, it could have been working originally: phpStorm's indicator that it's debugging is subtle.

Debug session within phpStorm


To set up, do the following:
  1. For chrome (probably the easiest), install this plugin: Chrome Plugin
  2. Configure the plugin and select phpStorm as your IDE.
  3. Next sign onto your remote server and enable xdebug module
    1. for cPanel, click select php version (only 5.4 and 5.5 supported)
    2. update version
    3. check the xdebug module
    4. click update
  4. Edit the php.ini, e.g. for cPanel, edit or create: /home/[username]/public_html/.user.ini
    ; Settings for xdebug
    ; the xdebug module was already loaded on this machine:
    ; zend_extension=/opt/alt/php54/usr/lib64/php/modules/xdebug.so
    xdebug.remote_host=[your modem's IP address]
    xdebug.remote_port=[a port forwarded through your modem/firewall, e.g. 9800]
    xdebug.remote_enable=1
    
  5. Setup phpStorm's port:
    Settings->Languages & Frameworks-->PHP-->Debug Xdebug port = [remote_port] set above
  6. Setup phpStorm remote debug settings:
    1. Run --> Edit configurations
    2. Add --> PHP Remote Debug
    3. Give it a name
    4. Ide key: "PhpStorm"
    5. Servers (select or add via the "...")
      1. Host: your remote server host
      2. Port: http port usually 80
      3. Debugger: xdebug
      4. Check "use path mappings"
      5. Setup any mappings
      6. Click validate
      7. Click close
    6. Click ok
  7. Run --> Start listening for PHP Debug Connections
  8. In the browser, click the debug icon in the address bar and select debug
  9. visit your sites URL
    You may need to go to the site first, then select debug, then refresh the page.
Note: .user.ini is cached by apache, so it may not work instantly.
When it works, the page won't seem to load, and in phpStorm the debugger tab should show a clickable green arrow ("Resume Program Execution") along with the current frame & variables.

I went through a couple of settings to get here so I may have missed a little tweak along the way.

Profiler

These settings were taken from: Diagnosing slow PHP execution with Xdebug and KCachegrind.
/home/[username]/public_html/.user.ini
; Settings for xdebug
; module was already loaded:
; zend_extension=/opt/alt/php54/usr/lib64/php/modules/xdebug.so
xdebug.profiler_output_dir=/home/[username]/php
xdebug.profiler_append=On
xdebug.profiler_enable_trigger=On
xdebug.profiler_output_name="%R-%u.trace"
xdebug.trace_options=1
xdebug.collect_params=4
xdebug.collect_return=1
xdebug.collect_vars=0
xdebug.profiler_enable=0
The relevant .trace file can be downloaded and loaded into phpStorm via: Tools->Analyse Xdebug Profiler Snapshot.
See Analyzing Xdebug Profiling Data for more information.

Wednesday, 24 June 2015

Good defaults for Wordpress (on a cPanel server)

After spending some time finding problems in a few Wordpress sites, both local in my dev environment and on a cPanel hosted server, I finally decided to sit down and work out the best base settings for each environment. The settings I've picked are mainly around dev and debugging, however I will try and keep this post updated as I come across new settings.

php.ini settings

Local Dev
These settings can be set in your local dev env php.ini (e.g. in c:\php-installdir\php.ini)
display_errors = On
display_startup_errors = On
log_errors = On
error_log = C:/temp/php-errors.log
error_reporting = -1
Production / hosted
On cPanel servers, create a file named ".user.ini" in public_html (/home/[user]/public_html/.user.ini) - without the quotes but starting with a dot. Similar contents, but limited to what you're allowed to set.
display_errors = Off
error_log = /home/[user]/logs/php-errors.log
error_reporting = -1
Depending on the server you may be able to alternatively use .htaccess (/home/[user]/public_html/.htaccess):

php_value display_errors Off
php_value error_log /home/[user]/logs/php-errors.log
php_value error_reporting -1

wp-config.php settings

Local Dev
// outputs more errors
// https://codex.wordpress.org/WP_DEBUG
define('WP_DEBUG', true);
// don't set WP_DEBUG_LOG=true as it overrides error_log set for php.ini
define('WP_DEBUG_LOG', false);
define('WP_DEBUG_DISPLAY', false);

// stop external http connections (this gives false errors when running locally)
define('WP_HTTP_BLOCK_EXTERNAL', true);

// not needed in dev most of the time
define('DISABLE_WP_CRON', 'true');
Production / hosted
// you might want to enable/disable this one as you come across errors
//define('WP_DEBUG', true);

// stop external http connections, speeds up loads
define('WP_HTTP_BLOCK_EXTERNAL', true);

/*
 * Disable the 'virtual cron'. For scheduled posts etc, instead use actual cron. see:
 * http://www.inmotionhosting.com/support/website/wordpress/disabling-the-wp-cronphp-in-wordpress
 */
define('DISABLE_WP_CRON', 'true');

Monday, 1 June 2015

Plex Framework 2.5.0 Plugin Manifest

I’ve found that the documentation on the website doesn’t tie up with the API that is currently included with Plex. So here’s the bits of data I’ve gleaned.
The plugins & directory structure of Plex is based around OSX / iOS Bundle Structures. Each plugin has an Info.plist file which is based off Apple’s one (the documentation is here) with some extra properties. Some of these are documented on Plex's site but some were missing. This is a near complete list of them all and what I could work out their purpose to be.
Keys for Info.plist taken from core.py
CFBundleIdentifier Plugin name
PlexPluginClass Type of plugin, default ‘Content’ (but is ‘Channel’ under bundleservices).
Values from constants.py:[‘Content’, ‘Agent’, ‘Channel’, ‘Resource’, ‘System’]
PlexPluginTitle The display name
PlexPluginIconResourceName Default: 'icon-default.png'
PlexPluginArtResourceName Default: 'art-default.jpg'
PlexPluginTitleBarResourceName Default: 'titlebar-default.png'
PlexPluginDevMode 1 or 0. If set to 1, don’t auto update
PlexPluginCodePolicy ‘Standard’ or ‘Elevated’ also ‘cloud’, ‘model’, ‘service’, ‘unpickle’. Default: ‘Standard’
PlexPluginModuleWhitelist optional, only seems to apply to the ‘Cloud’ PluginClass
PlexPluginAPIExclusions Name of variables/types to exclude from the __main__ globals
PlexPluginConsoleLogging 1 or 0. Whether it should log to the console, handy if you’ve manually started it up.
PlexPluginLogLevel Default is ‘Debug’
PlexAudioCodec optional supported audio codec
PlexVideoCodec optional supported video codec
PlexMediaContainer optional supported media container (file type)
PlexMinimumServerVersion optional
PlexFrameworkFlags ['SystemVerboseLogPeerService', 'UserMyPlexDevServer', 'LogServiceLoads', 'EnableDebugging', 'UseExtendedPython', 'LogMetadataCombination', 'UseRealRTMP', 'LogRouteConnections', 'LogModelClassGeneration', 'LogAllRouteConnections', 'SystemVerboseLogStoreService']
PlexBundleVersion stores the version of the Framework API, only on the Framework.bundle
PlexBundleCompatibleVersion Defaults to ‘2.0a1’
PlexPluginLegacyPrefix defaults to ‘/plugins/’ + plugin.identifier
PlexClientPlatforms Doesn’t appear to be used
PlexClientPlatformExclusions “We use these later to warn users that the channel is unsupported.”
PlexFrameworkVersion Target framework api version to load. set to 2
PlexRelatedContentServices Doesn’t appear to be used
PlexSearchServices
PlexURLServices
The latest version of the Framework as of writing is 2.6.2. I'll regenerate and post some stubs I created in a later post. These stubs allow the python code to compile in Eclipse without errors and also allows autocomplete in the python editor.

Friday, 3 April 2015

Repairing Windows Tools & Articles

For those times when we all think it must surely be simpler to fix an annoying Windows problem you're facing than to reinstall, or for those times when you forgot to take a recent full system image:

Here is a compilation of tools and articles I’ve used to try and fix some windows registry / component problems.

View ALL the hidden devices in device manager

Ghost/Hidden Network Interfaces

Repairing corrupt files using SFC (Windows 7/8)

Use the System File Checker tool to repair missing or corrupted system files

Repairing corrupt files using SFC (Windows 8 / Server 2012)

Fixing component store corruption in Windows 8 and Windows Server 2012

Resetting the TCP/IP stack

How to reset TCP/IP by using the NetShell utility
Windows 7: Wifi adapter could not bind IP protocol stack to network adapter

Checking USB2/3 connection speed

Verifying USB connection speed (USB 3 or USB 2?)

Comparing registry changes

REGDIFF – compares 2 exports, compares an export and current, sorts and merges.
I found that under Windows 7 64-bit, I needed to export both versions to xml (/XML option) otherwise it threw an error regarding unknown data type. The delta it creates contains false positives, so I found it most useful to export to xml and sort the before and after and just use a text compare tool.

The WinSxS Folder aka Component Store

WinSxS Folder in Windows 7 | 8 explained – this is where windows stores different versions of the core windows components (registry, dll, sys etc files)

Fixing missing PerfMon counters

PerfMon problems - Unable to Add Counters

Tuesday, 24 March 2015

Developing Plex Media Server Plugins

Plex Media Server has a nicely extensible API for writing plugins, which are classified into channels, agents and services. However, in trying to tweak a plugin I wanted to get working, I’ve discovered that the API that forms the latest Plex Media Server doesn’t match up with the API Reference provided. In my next post I’ll give dumps of the runtime API and the differences in the object model, which aren’t too big a deal unless you’re doing audio-only plugins (I’m getting a streaming radio station channel working).

There also didn’t seem to be an SDK or toolset for developing the plugins, which, upon looking at the plugin bootstrap procedure, I can understand why: a lot of the classes are dynamically generated and, together with some other imports, are set as global variables on the plugin’s executed __init__.py

I spent a bit of time trying to get PyDev to compile using some hackery, but the way I was trying to do it I hit too many hurdles. I was trying to add the globals that are exposed to the plugin at runtime to the same modules, so that PyDev would pick them up. I went down some dead-ends on that and eventually discovered that PyDev doesn’t seem to use the python process it kicks off in order to look up the global variables.

I did however get some launch scripts working, both from the command prompt and within PyDev. It was part reverse engineering, part trial and error. The plugin can be located anywhere. To make the paths shorter, I moved the local application data using the Web settings –> General (Advanced Settings) from “C:\Users\[user]\AppData\Local\Plex Media Server” to “C:\PlexData\Plex Media Server”. My install directory is C:\PlexInstall.

Running from the command line

Here are the contents of my C:\Development\MyPlugin.bundle\run-plugin.cmd. It lets you test that the plugin compiles via the bootstrap, but none of the http wiring or plugin data is configured, so it won’t serve requests.

@echo off
set PLEXHOME=C:\PlexInstall
set PLEXLOCALAPPDATA=C:\PlexData
set CURDIR=%CD%
set PYTHONPATH=%PLEXHOME%\python27.zip;%PLEXHOME%\Exts
set PYTHONHOME=%PLEXHOME%

"%PLEXHOME%\PlexScriptHost.exe" "%PLEXLOCALAPPDATA%\Plex Media Server\Plug-ins\Framework.bundle\Contents\Resources\Versions\2\Python/bootstrap.py" "--log-file=%CURDIR%\logs\plugin.log" "%CURDIR%" %*

Setting up PyDev Environment

During the process of writing this up I discovered that when I had thought that I had it working using PlexScriptHost in PyDev, it was using regular Python. I wasted a fair amount of time discovering that I didn’t have it working (as far as I can tell). There’s a bug in PyDev where if you have a launch configuration that uses a different interpreter to what the project is configured with, it will revert back to the project’s interpreter when you change certain settings but stay the same other times. So take it from me, don’t waste your time trying to use PlexScriptHost.

I configured Python 2.7 for win32 as my interpreter, with the following PYTHONPATH (configured from Window –> Preferences –> PyDev –> Interpreters –> Python):

  1. C:\PlexData\Plex Media Server\Plug-ins\Framework.bundle\Contents\Resources\Platforms\Shared\Libraries
  2. C:\PlexData\Plex Media Server\Plug-ins\Framework.bundle\Contents\Resources\Versions\2\Python
  3. C:\PlexInstall\DLLs
  4. C:\PlexInstall\Exts
  5. C:\PlexInstall\python27.zip
  6. C:\PlexInstall
  7. C:\PlexData\Plex Media Server\Plug-ins\Framework.bundle\Contents\Resources\Platforms\Windows\i386\Libraries

To make the launch config easier to manage, it’s best to link in the location of bootstrap.py. Right click on your plugin project and select:

  1. New —> Folder
  2. Call the folder “Framework-2-Python”
  3. Click Advanced, Link to alternate location
  4. Type in C:\PlexData\Plex Media Server\Plug-ins\Framework.bundle\Contents\Resources\Versions\2\Python
  5. Click finish

For the launch config:

  1. Right click the project, select Run As –> Run configurations…
  2. Select Python Run then click new
  3. Next to main module, browse to bootstrap.py in the previously created folder, i.e. ${workspace_loc:MyPlugin.bundle/Framework-2-Python/bootstrap.py}
  4. Under arguments enter something similar to this (changing the name of your project): --log-file=${workspace_loc:/MyPlugin.bundle/logs}\plugin.log ${workspace_loc:/MyPlugin.bundle}
  5. Set working directory to C:\PlexInstall
  6. Go to Environment and add: PYTHONPATH=C:\PlexInstall\python27.zip;C:\PlexInstall\Exts PLEXLOCALAPPDATA=C:\PlexData

The PLEXLOCALAPPDATA variable is only required if you moved the application data from C:\Users\[user]\AppData\Local. That should do it.

I’ve got a sitecustomize.py I’ve been working on to get rid of the compile errors in PyDev. It’s a hack at the moment as they’re just stub classes; a better way would be to generate Python source code from the global variables/classes of the runtime plugin environment.
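As a rough illustration of the idea, here’s a minimal sitecustomize.py sketch. The global names stubbed below (Log, Dict, Prefs, etc.) are just examples; substitute whichever framework globals your plugin actually uses.

```python
# sitecustomize.py -- stub out Plex framework globals so PyDev's code
# analysis stops flagging them as undefined. These are inert stand-ins;
# the real runtime injects the genuine objects instead.
try:
    import __builtin__ as builtins  # Python 2, which Plex ships
except ImportError:
    import builtins                 # lets the same file load under Python 3

class _Stub(object):
    """Swallows any attribute access or call."""
    def __getattr__(self, name):
        return self

    def __call__(self, *args, **kwargs):
        return self

# Example names only -- add whichever globals your plugin touches.
for _name in ('Log', 'Dict', 'Prefs', 'Plugin', 'ObjectContainer'):
    if not hasattr(builtins, _name):
        setattr(builtins, _name, _Stub())
```

Because the stubs absorb any attribute access or call, chained uses like Log.Info('...') no longer show up as errors in the editor.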

This was all done with the following versions: Plex Media Server v0.9.11.7.803-87d0708, Framework.bundle 2.5.0, “Mon Jul 28 12:19:14 UTC 2014”.

Tuesday, 24 February 2015

Diagnosing MQ client and resource adapter problems via trace logging

Standalone tracing WebSphere MQ classes for JMS applications

It’s a relatively simple process to enable logging for MQ JMS client libraries.
1. Add a reference to a WebSphere MQ classes for JMS configuration file.
Here are the command-line arguments to reference the common services properties file, called “mqjms.properties”, for MQ.
Note: the location for the MQ classes for JMS is a URI (i.e. prefix it with file:), whilst the location for MQ classes for Java is a regular path, and annoyingly it won’t give you an error if it can’t correctly read the file.
Also included is a reference to a Java keystore, plus turning on java.net logging for SSL and connection handshaking, as getting MQ and SSL working seems to be a constant battle, with cipher suites being the most common cause. I even had trouble trying to use the same SSL cipher from WLP in both standalone JMS and in the non-JMS setup. It seems to require some trial and error for each of the 3 ways of using MQ listed here.
-Dcom.ibm.msg.client.config.location=file:mqjms.properties
-Djavax.net.ssl.keyStore=certStore.jks
-Djavax.net.ssl.keyStorePassword=password
-Djavax.net.debug=ssl,handshake
2. Add the settings for Tracing WebSphere MQ classes for JMS applications into the configuration file.
mqjms.properties:
com.ibm.msg.client.commonservices.trace.status=ON
com.ibm.msg.client.commonservices.trace.outputName=logs/mq-jms-trace.log
com.ibm.msg.client.commonservices.trace.level=8
# This setting below seemed to have no effect
# com.ibm.msg.client.commonservices.trace.include=com.ibm.mq.jmqi.remote;com.ibm.mq.jms;com.ibm.msg.client.wmq

Tracing WebSphere MQ resource adapter inside an application server – websphere liberty profile

This is probably the easiest to set up. You can either set it up using properties on the ResourceAdapter, similar to above, as documented here: Tracing the WebSphere MQ resource adapter. Or you can get finer-grained control over packages’ logging and levels using WebSphere’s tracing framework: WLP Logging and Trace
For WLP it’s as simple as adding these lines to the server instance’s bootstrap.properties:
# enable javax.net level logging
javax.net.debug=ssl,handshake
# default value
com.ibm.ws.logging.trace.file.name=trace.log
com.ibm.ws.logging.trace.specification=*=INFO:com.ibm.ws.mq.*=DEBUG=enabled:com.ibm.mq.*=DEBUG=enabled:WMQ.*=DEBUG=enabled:com.ibm.msg.client.wmq.*=DEBUG

Tracing WebSphere MQ classes for Java applications

If you’re still using the ‘old’ MQ classes for Java (which is useful if you want barebones access to MQ), there are 2 ways to enable trace logging. As documented in Tracing MQ for Java, you can use the MQEnvironment class, which is handy if you only want to trace a few specific operations:
MQEnvironment.enableTracing(1);   // start trace
 ...                              // these commands will be traced
MQEnvironment.disableTracing();   // turn tracing off again
Or use the common services properties file by adding this to the command line:
-Dcom.ibm.mq.commonservices=mqjava.properties
Having a file mqjava.properties in the current directory:
# Base WebSphere MQ diagnostics are disabled 
Diagnostics.MQ=disabled  
# Java diagnostics for the WebSphere MQ Java Classes are both enabled
Diagnostics.Java=wmqjavaclasses
# High detail Java trace
Diagnostics.Java.Trace.Detail=high
# Java trace is written to a file and not to the console.
Diagnostics.Java.Trace.Destination.File=enabled
Diagnostics.Java.Trace.Destination.Console=disabled  
# Directory for Java trace file
Diagnostics.Java.Trace.Destination.Pathname=logs
# Directory for First Failure Data Capture
Diagnostics.Java.FFDC.Destination.Pathname=logs\\ffdcdir
# Directory for error logging
Diagnostics.Java.Errors.Destination.Filename=logs\\wmq-errors.log

A common gotcha – cipher specs

Cipher specs always seem to cause the most issues; this is a list of MQ CipherSpec to JSSE CipherSuite mappings for 7.5.0. This patch added support for non-IBM JDKs. There’s a discussion here about the support.
The only one I managed to get working was this sslCipherSuite using Oracle JDK 1.7.0_51: SSL_RSA_WITH_3DES_EDE_CBC_SHA

Tuesday, 3 February 2015

Keeping a windows process after a jenkins job

One of our jobs on Jenkins is to deploy and start up an application server on a remote slave. However, we were having 2 problems with it:
1. the job wasn’t finishing
2. when we terminated it from Jenkins, it killed the process it had spawned (the application server).

I spent ages butting my head against Jython on Windows; in hindsight, I could have saved a lot of time if I’d realised how little of the Python/Windows functionality I needed was actually implemented in Jython on Windows.

(1) was solved by getting the start command to write stdout/stderr to a file (irrespective of whether anything was written).

(2) I tried many elaborate solutions, still thinking it was a Jython or Windows related problem. After seeing it mentioned in blog posts a few times, I finally got what they were saying.
Jenkins has a section on their site: Spawning processes from build. The key thing I had missed was the BUILD_ID environment variable, which Jenkins’ process tree killer uses to identify (and kill) processes belonging to a build. So unset it and, as long as you’ve spawned a new process, it should be fine.
The simple solution should be a start_background.bat:
set BUILD_ID=
start /B "" cmd /C %*
And just pass your full command to start_background.bat

Unfortunately that didn't work in my odd case, so here's my overkill solution for those that need it:
    # Needs these imports at the top of the script: subprocess, sys, tempfile, time.
    # Uses cmd's start to trigger the call asynchronously, not attached to this
    # process, as os.spawnl with os.P_DETACH isn't available under Jython.
    def windowsAsync(self, systemCommand, jobName):
        cmd = open(jobName + '.cmd', 'w')
        cmd.write("set BUILD_ID=\n")
        cmd.write(systemCommand + "\n")
        cmd.write("exit\n")
        cmd.close()
        startCommand = 'start "" /B cmd /C ' + jobName + '.cmd'
        # Save the output to temporary files to make sure we don't hold handles
        # on the spawned process (don't know if this actually achieves that).
        tOut = tempfile.NamedTemporaryFile()
        tIn = tempfile.NamedTemporaryFile()
        print "Using output temp file: " + tOut.name
        print "Invoking OS command: " + systemCommand
        print "Embedded command: " + startCommand
        process = subprocess.Popen(startCommand, shell=True, stdin=tIn, stdout=tOut, stderr=tOut)
        # Deliberately not calling process.wait(); give the command time to run instead.
        print "Sleeping"
        time.sleep(30)

        print "--- consoleOutput start:"
        # Rewind and read the text written to the temporary file
        tOut.seek(0)
        print tOut.read()
        tOut.close()
        sys.stdout.flush()
        tIn.close()
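For completeness, the two essential ingredients (clear BUILD_ID, detach the child) can also be sketched in plain Python. This is a hypothetical helper written in modern Python 3, not the Jython script above; spawn_detached is a name made up for illustration.

```python
import os
import subprocess
import sys

DETACHED_PROCESS = 0x00000008  # Windows CreateProcess flag

def spawn_detached(command):
    """Spawn `command` so Jenkins' process tree killer leaves it alone."""
    env = os.environ.copy()
    # Jenkins identifies a build's children by the BUILD_ID env var,
    # so drop it before spawning.
    env.pop('BUILD_ID', None)
    kwargs = {'env': env}
    if sys.platform == 'win32':
        # Detach from the parent's console so the child outlives the build.
        kwargs['creationflags'] = DETACHED_PROCESS
    else:
        # POSIX: put the child in its own session instead.
        kwargs['start_new_session'] = True
    return subprocess.Popen(command, **kwargs)
```

On Windows the DETACHED_PROCESS creation flag keeps the child out of the parent’s console; on POSIX a new session serves the same purpose.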

Friday, 16 January 2015

Low nproc limit prevents sudo'ing to another user

A while ago I posted a solution to OutOfMemoryError: unable to create new native thread. However, I’ve since encountered a problem caused by the nproc setting that wasn’t fixable by a simple ulimit command: we couldn’t sudo into a service account while all our application servers were running.

Our sysadmins have set up a user escalation script, which there’s nothing wrong with. It does some prechecks, sudos a script under the requested user, does some logging, then does an exec <configured shell>. When there are too many processes, it manages to run the 2nd script as the user, but the exec command blocks and it never gets to the profile script that executes the ulimit command.
I traced it down to the default soft limit on the number of processes for all users being 1024.
Our process count is way above that, which means that when it tries to create a new process for bash as part of the sudo script, it can’t, so we are unable to sign into the account.

As the profile scripts never get run, this one can't be solved at the user level.
I started doing some searching around, in /etc/security/limits.conf we had:
*       -         nproc     31768
I found if I specifically added a user to that file then it would work, i.e.:
serviceaccount     -      nproc     4096

This serverfault/stackexchange discussion highlights the problem. This Red Hat bug request from 2008 shows that they requested this file, /etc/security/limits.d/90-nproc.conf, be added with this setup:
*        soft      nproc     1024
root     soft      nproc     unlimited
Which shows where it comes from and why manually specifying it in /etc/security/limits.conf worked (the more specific rule won).
The soft limit is 1024 for all non-root users, but their hard limit is 31768, which means a process is initially limited to 1024 until a shell raises its own limit. All other processes that were started without a raised limit are unable to create more processes, including our bash shell invoked during our sudo script.

So for power users’ systems you’d need to raise the limit in /etc/security/limits.conf to at least 4096; Oracle apparently recommends this for Java. For a service account running multiple app servers, 8192 is probably safest.

In our profile script I output the ulimit and the current process count so we know if we’re getting close to the limit.
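A sketch of that kind of check in Python (Linux-only, assuming a /proc filesystem; note that RLIMIT_NPROC actually counts threads per user, so counting processes gives a lower bound):

```python
import os
import resource

def nproc_headroom():
    """Return (user process count, soft nproc limit, hard nproc limit)."""
    soft, hard = resource.getrlimit(resource.RLIMIT_NPROC)
    uid = os.getuid()
    used = 0
    for pid in os.listdir('/proc'):
        if not pid.isdigit():
            continue  # skip non-process entries like /proc/meminfo
        try:
            if os.stat('/proc/' + pid).st_uid == uid:
                used += 1
        except OSError:
            continue  # process exited while we were scanning
    return used, soft, hard
```

Emitting these two numbers from the profile script makes it obvious when the 1024 soft limit is about to bite.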