Beagleboard xM 1 GHz operation – safely and reliably

Over the past couple of days, I’ve been reading on the beagleboard.org blog about the 3.11-rc2 kernel coming out, and that it contains a new driver, ti-abb-regulator (adaptive body bias), which allows the OMAP chips to adjust their operating voltages for different operating frequencies.  Combined with the SmartReflex class 3 driver (which has been in the kernel since 3.6), this finally allows the Beagleboard xM to operate safely at 1 GHz.  It’s been a long time coming; the last kernel where this was safe was 3.0.28, back in early 2012.  The process of enabling 1 GHz operation comes in four parts.

First, since the TI ABB driver bindings are for the device tree based boot only, we need to bring in some resources for the device tree (https://github.com/Teknoman117/beagleboardxm-kernel/blob/v3.11.x/patches/drivers/0005-ARM-dts-omap-clock-bindings-driver.patch).  The ABB driver requires a reference to the system clock, as that is what drives it in hardware.  I began to code one up myself, but while looking for information on that, I stumbled onto a post from April 2013 (http://lkml.indiana.edu/hypermail/linux/kernel/1304.1/04079.html) containing such a driver.  So I pulled that resource in, which provides the ability to bring the OMAP clock references into the device tree.  Sweet!

Second, we dive into the device tree.  We need to add the system clock binding to our definition of the OMAP3 CPU (https://github.com/Teknoman117/beagleboardxm-kernel/blob/v3.11.x/patches/omap/0014-ARM-dts-omap3-add-clock-bindings-to-dts.patch).  I created references to the required system clock, and then, according to this post (http://lkml.indiana.edu/hypermail/linux/kernel/1304.1/04074.html), one also needs to add a reference to the CPU’s dpll1 clock for the CPU frequency driver.  Okay!
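I won’t reproduce the whole patch here, but the shape of the change is roughly this (an illustrative sketch only, not the exact patch; the node layout and the clock label come from the patch and the clock bindings driver brought in above):

```
/* illustrative sketch: clock references added to the OMAP3 cpu node */
cpus {
    cpu@0 {
        /* dpll1 is what actually drives the MPU, so the cpu
           frequency driver needs a handle to it */
        clocks = <&dpll1_ck>;
        clock-names = "cpu";
    };
};
```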

Third, we need to modify the power management startup for the OMAP processor (https://github.com/Teknoman117/beagleboardxm-kernel/blob/v3.11.x/patches/omap/0015-ARM-dts-omap-boot-support-cpu0-cpufreq.patch).  By default, it will only load a cpufreq driver when performing a non-dts based boot.  This is a problem.  So we tell the power management driver to always initialize the cpufreq system, and modify the initialization function to load the legacy cpufreq driver when performing a non-dts boot and the new generic cpu0-cpufreq SoC driver when performing a dts based boot.

Fourth, we need to add the ABB bindings for the Beagleboard xM to omap3-beagle-xm.dts (https://github.com/Teknoman117/beagleboardxm-kernel/blob/v3.11.x/patches/omap/0016-ARM-dts-omap3-beagle-xm-add-opp1g-abb-bindings.patch).  This consists of two modifications: 1) adding the ti-abb-regulator driver, and 2) adding the frequency and core voltage values for OPP1G, the 1 GHz operating point of the OMAP36xx/OMAP37xx CPUs.  After this modification, the Beagleboard xM can boot supporting 1 GHz under the device tree based boot.
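With the cpu0-cpufreq driver, operating points are expressed as frequency/voltage pairs on the cpu node, so part 2) boils down to appending one new pair.  Roughly (the voltage values below are placeholders for illustration only; the real OPP1G numbers are in the patch linked above):

```
cpu@0 {
    operating-points = <
        /* kHz      uV  (placeholder voltages) */
        800000    1325000
        1000000   1375000   /* OPP1G: the new 1 GHz operating point */
    >;
};
```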

Screenshot of the cpu-freq info for these patches

beagleboard_xm_1ghz

These patches have been merged into the https://github.com/RobertCNelson/armv7-multiplatform v3.11.x branch, so to build a kernel, check out the v3.11.x branch

nathaniel@Sedenion:~> git clone https://github.com/RobertCNelson/armv7-multiplatform.git

nathaniel@Sedenion:~> cd armv7-multiplatform

nathaniel@Sedenion:~/armv7-multiplatform> git checkout origin/v3.11.x -b v3.11.x

and then follow the standard build instructions provided in the README.

After you build the kernel, you need to modify the uEnv.txt file to enable the dts based boot.  At the bottom of  uEnv.txt, comment out this line

uenvcmd=run boot_classic; run device_args; bootz 0x80300000 0x81600000:${initrd_size}

and uncomment this line.

uenvcmd=run boot_ftd; run device_args; bootz 0x80300000 0x81600000:${initrd_size} 0x815f0000

When I was messing around, the bootloader seemed to fail to detect the dtb to use for the current board, so you can force it by adding this line at the beginning of the file

fdtfile=omap3-beagle-xm.dtb
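For reference, after all three edits the relevant pieces of uEnv.txt look like this:

```
fdtfile=omap3-beagle-xm.dtb

#uenvcmd=run boot_classic; run device_args; bootz 0x80300000 0x81600000:${initrd_size}
uenvcmd=run boot_ftd; run device_args; bootz 0x80300000 0x81600000:${initrd_size} 0x815f0000
```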

There is still an annoying issue at the moment, not with the operating frequency, but with the USB ports on the board.  The sprz319 erratum patch has not yet been ported to 3.11, so until I finish that, the USB ports are not stable.  It will probably take a few hours, but it shouldn’t be too hard.  Happy coding!

Edit: I have ported the sprz319 erratum patch to the beagleboard xM – https://github.com/Teknoman117/beagleboardxm-kernel/blob/v3.11.x/patches/omap_sprz319_erratum_v2.1/0001-hack-omap-clockk-dpll5-apply-sprz319e-2.1-erratum-kernel-3.11-rc2.patch.  Make sure to uncomment it in patch.sh before running build_kernel.sh.  It is disabled by default because it breaks support for the older Beagleboard (not xM) series.

Using python as a shell

I have absolutely no idea if this is useful for anyone (embedded systems?), but I was creating a user on the new OS image I’m using for my Beagleboard xM and wondered what would happen if I specified python as the login shell for the user.  Anyway, it worked, and you get the python shell when you log into the system.
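The login shell is just the last field of a user’s /etc/passwd entry, so the effect comes from a line along these lines (the username, IDs, and interpreter path here are made up for illustration):

```
# the final field of an /etc/passwd entry is the login shell;
# pointing it at the python interpreter gives a python REPL on login
pyuser:x:1001:1001:Python shell user:/home/pyuser:/usr/bin/python
```

Equivalently, `chsh -s` can set this for an existing user, though the interpreter usually needs to be listed in /etc/shells for chsh to accept it.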

python_user_create

Work to bring the Beagleboard xM to full frequency under Linux 3.2.28 and beyond.

Well, the owners and fans of the Beagleboard xM have a bit of a dilemma when it comes to picking a version of Linux to run on their boards.  Many of you, myself included, are quite annoyed that the big dog is clocked down when running a current version of Ubuntu or Angstrom.  For instance, the other day I installed the latest build of Ubuntu 12.04-r5 for armhf on my board and was quite unhappy to learn that the maximum frequency was capped at 800 MHz, even though the board is advertised as a 1 GHz ARM computer.  The SD card that ships with the board allows the full 1 GHz, but it uses an older version of the Linux kernel, 3.0.x.  The current stable is kernel 3.5.3.  Well, I don’t know if it’s just me, but I want to squeeze out every last instruction per second it will allow me.  So upon searching the internet for days, I came across this one blog:

http://blog.galemin.com/2012/03/buildroot-2012-02-for-beagleboard-xm-with-li-5m03-mt9p031-camera-support/

It provides a custom buildroot setup for some camera hardware directly attached to the Beagleboard-xM’s camera capture lines.  But the main thing I noticed when I tried it was that the board was operating at 1 GHz, and on kernel 3.2.7.  Upon further inspection, I found the patches they were using to raise the top frequency.  So I forked Robert Nelson’s stable kernel repository, which can build one of the latest stable kernels and which I’ve personally tested with Ubuntu and Angstrom.  It also includes the patches for the xM which keep the USB ports from shutting down randomly.  I integrated the patches into my fork and modified them a bit to fit the later kernel, and after a little wait for the build, I had Ubuntu 12.04 for armhf running on the 3.2.28-x14 kernel.  You can find my repository for this at: (edit: Kybernetes uses the 3.5.7 kernel now, and the repo reflects this update)

https://github.com/Teknoman117/kybernetes-kernel

However, before you get all excited, it’s not quite done yet.

Note #1 – Don’t use this on a Beaglebone or original Beagleboard.  There is a patch enabled (0002-Fix-sprz319-erratum-2.1.patch) which fixes the xM’s USB problems; however, it will cause the other Beagleboards not to boot at all.

Note #2 – I talked with Robert Nelson about my patches, and without further work they could potentially damage your Beagleboard xM.  The automatic voltage scaling (SmartReflex) does not work properly for the OMAP3 on recent Linux kernels, at least for the Beagleboard.  The people over at TI and the people working on Angstrom have patches for this that make it completely safe to run at 1 GHz; however, the patches are for Linux 3.0.x, there are over a hundred of them, and many are hundreds of lines long.  Some of them will probably apply to 3.2 properly, and some definitely will not.  So before this works, I am going to have to figure out what works and what doesn’t and integrate it into the kernel.  I will do this for kernel 3.4, because the AVS system has been changed entirely in 3.6.  If I am going to do this work, I will do it for the current kernel version, because otherwise it would be a waste.  So keep following, because I am taking this on!

– Teknoman117

C++ Plugins with Boost::Function on Linux

Over the past few weeks, one of the concepts I’ve been experimenting with is plugin architecture: the idea of having a core application which can be extended by shared objects without recompiling the core program, or, possibly, a way of defining more services in an application built around plugins.  What I’ve done so far hasn’t been much, but then I haven’t spent much time working on it.  When I started researching it, one of the things I wanted was to be able to define a C++ object in a plugin and instantiate that object in the main program.  So, this is what I have come up with.  There are three parts to this solution: the plugin class definition in plugin.hpp, the plugin in awesomeplugin.cpp, and the loader code in loader.cpp.  I’ve also included a CMakeLists.txt file to compile it with cmake.

plugin.hpp

#ifndef _PLUGIN_HPP_
#define _PLUGIN_HPP_

#include <string>

namespace plugins
{
  class Plugin
  {
  public:
    // Virtual destructor so the loader can safely delete plugins through a Plugin*
    virtual ~Plugin() {}
    virtual std::string toString() = 0;
  };
}

#endif

awesomeplugin.cpp

#include "plugin.hpp"

namespace plugins
{
  class AwesomePlugin : public Plugin
  {  
  public:
    // A function to do something, so we can demonstrate the plugin
    std::string toString()
    {
      return std::string("Coming from awesome plugin");
    }
  };
}

extern "C" 
{
  // Function to return an instance of a new AwesomePlugin object
  plugins::Plugin* construct()
  {
    return new plugins::AwesomePlugin();
  }
}

loader.cpp

#include <iostream>
#include <vector>
#include <dlfcn.h>
#include <boost/function.hpp>

#include "plugin.hpp"

typedef std::vector<std::string>             StringVector;
typedef boost::function<plugins::Plugin* ()> pluginConstructor;

int main (int argc, char** argv)
{
  // Assemble the names of plugins to load
  StringVector plugins;
  for(int i = 1; i < argc; i++)
  {
    plugins.push_back(argv[i]);
  }

  // Iterate through all the plugins and call construct and use an instance
  for(StringVector::iterator it = plugins.begin(); it != plugins.end(); it++)
  {
    // Alert that we are attempting to load a plugin
    std::cout << "Loading plugin \"" << *it << "\"" << std::endl;

    // Load the plugin's .so file
    void *handle = NULL;
    if(!(handle = dlopen(it->c_str(), RTLD_LAZY)))
    {
      std::cerr << "Plugin: " << dlerror() << std::endl;
      continue;
    }
    dlerror();

    // Get the pluginConstructor function
    pluginConstructor construct = (plugins::Plugin* (*)(void)) dlsym(handle, "construct");
    char *error = NULL;
    if((error = dlerror()))
    {
      std::cerr << "Plugin: " << error << std::endl;
      dlclose(handle);
      continue;
    }

    // Construct a plugin
    plugins::Plugin *plugin = construct();
    std::cout << "[Plugin " << *it << "] " << plugin->toString() << std::endl;
    delete plugin;

    // Close the plugin
    dlclose(handle);
  }

  return 0;
}

CMakeLists.txt

# Project Stuff
cmake_minimum_required (VERSION 2.6)
project (PluginDemo)

# Default Options
add_definitions("-std=c++0x")

# Find Boost
find_package(Boost REQUIRED)
include_directories(${Boost_INCLUDE_DIRS})

# Pull in the project includes
include_directories(${PROJECT_SOURCE_DIR}/include)
set(LIBRARY_OUTPUT_PATH ${PROJECT_SOURCE_DIR}/lib)
set(EXECUTABLE_OUTPUT_PATH ${PROJECT_SOURCE_DIR}/bin)
set(LIBS ${LIBS} pthread boost_thread rt)

# Build the plugin experiment
add_executable(pluginloader src/loader.cpp)
target_link_libraries(pluginloader ${LIBS} dl)
add_library(awesomeplugin SHARED src/awesomeplugin.cpp)

Basically, create a directory with the folders bin, lib, and src.  Put loader.cpp, awesomeplugin.cpp, and plugin.hpp in src, and CMakeLists.txt in the top-level directory.  Open a terminal and run "cmake . && make".  Then run the pluginloader program and pass it the path to the plugin’s .so in the lib folder.  Here is the output from my computer.

nathaniel@XtremePC:~/Programming/Experimentation> cmake .
-- The C compiler identification is GNU
-- The CXX compiler identification is GNU
-- Check for working C compiler: /usr/bin/gcc
-- Check for working C compiler: /usr/bin/gcc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Boost version: 1.46.1
-- Configuring done
-- Generating done
-- Build files have been written to: /home/nathaniel/Programming/Experimentation
nathaniel@XtremePC:~/Programming/Experimentation> make
Scanning dependencies of target awesomeplugin
[ 50%] Building CXX object CMakeFiles/awesomeplugin.dir/Plugins/awesomeplugin.cpp.o
Linking CXX shared library lib/libawesomeplugin.so
[ 50%] Built target awesomeplugin
Scanning dependencies of target pluginloader
[100%] Building CXX object CMakeFiles/pluginloader.dir/Plugins/loader.cpp.o
Linking CXX executable bin/pluginloader
[100%] Built target pluginloader
nathaniel@XtremePC:~/Programming/Experimentation> bin/pluginloader lib/libawesomeplugin.so
Loading plugin "lib/libawesomeplugin.so"
[Plugin lib/libawesomeplugin.so] Coming from awesome plugin
nathaniel@XtremePC:~/Programming/Experimentation>

– Teknoman117

Nvidia system with 3 monitors

Last year (2011), when I graduated high school, my school was going through a huge overhaul of the campus.  The old campus had slowly been demolished as new buildings were built.  That said, they also went through a shift in technological resources.  They decided to write off or just plain junk a significant portion of the old computers (P4-era Dell Optiplex machines), and I happened to obtain two 17″ TFT panel LCD screens.  I brought them home thinking I’d do something with them in the future.  I happened to be working with an upstart gaming studio, E1FTW Games (http://www.e1ftwgames.com/), that summer (I still am), and I had an iMac on my desk, so I did nothing with the monitors.  After I left for college, I used the pair of them with an old eMachines computer that my family had long since forgotten about as my at-home computer, because my primary (most awesome) computer had come with me to college and resided in my dorm room.  When I finished my first year, I lived (and still do as of July 2012) at my parents’, and I set my big computer back up.  I use a 1080p 23.5″ LCD TV as my primary monitor, but I seriously wanted to use the pair of monitors I had with my desktop.  Much to my dismay, Nvidia GPUs only support 2 monitors per chip, so even though I had the three monitors on my desk, only the 23.5″ panel worked along with one of the smaller screens.

So here I was, trying to find the cheapest Nvidia GPU I could that would fit into a PCIe x1 slot.  Much to my surprise, they cost more than their PCIe x16 counterparts, something I regard as pretty damn stupid.  So I searched for one that would fit in the PCI bus.  They were even more than the PCIe x1 cards.  It just wasn’t fair.  So I was lining up to buy a GeForce GT 430 for something like $80 that slotted into a PCI socket.  I was pretty bummed that this was the only solution, but then I had an idea.  PCIe is supposed to be failure tolerant.  If one of the lanes goes dead, it just isolates the problem by ignoring the fact that it exists.  So I had a thought: could I stick a PCIe x16 card in a PCIe x1 slot and operate at just 1/16th the bandwidth?  Sure enough, there were websites all over the internet that described cutting the end of the PCIe x1 slot off and placing a PCIe x16 card in it.  I decided to try it out, and what do you know, it worked.  So here are some pictures.

All the screens loaded up onto my desk

My desktop. It doesn’t look like much, but it’s my trusty computer.

The eMachines computer mentioned earlier, which I’m stripping of its GeForce 7300 GS graphics card

This is one of the most confused pieces of computing hardware I’ve ever owned. It’s labeled as a GeForce 7200 GS, but it’s been identified as either that OR a GeForce 7300 SE (not a typo)

Targeted a PCIe x1 slot for chopping

Both cards installed: a GeForce GTX 560 2 GB (the primary card) and the GeForce 7200/7300 GS

Viewing Steam and a test website under Chrome on the monitors plugged into the 7200 GS, and running the Nvidia fluids demo on the primary monitor. Fun fact: the fluids demo runs fine on the old card too, because it uses the GTX 560 to run the physics

Both GPUs identified in Furmark under Windows 7

World of Tanks up on the center monitor and stuff on the sides

All three monitors up and working under OpenSUSE 12.1 with the nvidia 295.59 drivers installed. This is under Xinerama, which I ended up disabling, see below

There was one unforeseen side effect of running the GPUs under Linux (my OS of choice).  I was trying to use Xinerama to make one contiguous display so I could do the awesome extended desktop thing, but alas, it was not to be, considering that I am using two widely varied cards.  The GeForce 7300 card is so old it was available before Windows XP had any service packs.  It doesn’t even have unified shader processors; it can only run one shader program at a time, with dedicated vertex and fragment units straight on the card.  It’s a DX9 GPU.  The primary card is a GeForce GTX 560: a card with 8 times the memory, 336 CUDA cores, hundreds of times the throughput, and support for DX11 and OpenGL 4.x.  So compositing did not work, and GL was disabled on the displays driven by the old card because it wasn’t compatible with the main card; in turn, because not all screens had GL, KDE wouldn’t run the effects manager.  This resulted in really slow window operations; the UI was very laggy.  So I decided to give separate X screens a go.  It works flawlessly.  Windows may be locked to their respective screens, but it’s not at all bad.  Kwin places new windows on the screen where the mouse is when the application is launched.  Although, I do wish that when I want a new chromium window I could put it on another screen without having to run DISPLAY=":0.2" chromium from the console when it’s already launched on another X screen.  I spend a lot of time in the console though, so it’s not really too bad.  Beats having only one monitor.  Since I chose to do it this way, OpenGL applications are supported in all the windows, and they start on the primary screen by default unless instructed otherwise.  Fullscreen OpenGL applications on the two side monitors are unpredictable and unstable, but just fine on the center screen, driven by the massive GPU.  All in all, it’s an awesome setup and I love it.
Linux has come a long way since its conception, and now, with Unity3D, a very popular game engine, officially supporting Linux and Autodesk releasing their 3D software (such as Maya) for Linux, maybe Windows will start losing its stranglehold on gaming.

– Teknoman117