
Replace Text Recursively Using grep and sed on MacOS (OS X)

As with many MacOS-isms, you have to adjust your old-school LINUX habits to work with the slightly different command syntax of the MacOS tools. Specifically, in this case, xargs and sed.

xargs can be an amazingly powerful ally when automating command-line functionality, wrapping the referenced command with automatic substitutions of a placeholder (here, the '@' character).

I’ll get right to the meat of this. While trying to automate a grep -> sed pipeline, I encountered this error:

egrep -rl 'version 0.0.1' src/* | xargs -i@ sed -i 's/version 0.0.1/version 0.0.1c/g' @
xargs: illegal option -- i

It turns out that the MacOS xargs likes -I instead of -i (a quick trip to man sorted that out).

     -I replstr
             Execute utility for each input line, replacing one or more occurrences of replstr in up to replacements (or 5 if no -R flag is specified) arguments
             to utility with the entire line of input.  The resulting arguments, after replacement is done, will not be allowed to grow beyond 255 bytes; this is
             implemented by concatenating as much of the argument containing replstr as possible, to the constructed arguments to utility, up to 255 bytes.  The
             255 byte limit does not apply to arguments to utility which do not contain replstr, and furthermore, no replacement will be done on utility itself.
             Implies -x.
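If (like me) you have to squint at that man page entry, a quick throw-away example makes the behavior obvious. -I lets you pick any placeholder string you like, and xargs substitutes each input line wherever that placeholder appears in the command:

echo "file1 file2 file3" | tr ' ' '\n' | xargs -I@ echo "processing @"
processing file1
processing file2
processing file3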

Next I ran into an error with sed when using the ‘edit in place’ flag -i (yes, that makes for confusing debugging when you have the “offending” switch in two places).

egrep -rl 'version 0.0.1' src/* | xargs -I@ sed -i 's/version 0.0.1/version 0.0.1c/g' @
sed: 1: "src/com/ingeniigroup/st ...": bad flag in substitute command: 'a'
sed: 1: "src/com/ingeniigroup/st ...": bad flag in substitute command: 'a'
sed: 1: "src/com/ingeniigroup/st ...": bad flag in substitute command: 'a'
[...]

With a little testing of the individual command, and another trip to man, it took a little noodling to deduce that the error message was being generated because sed was treating the sed command as the output file suffix and the filename as the sed command! Adding an empty string after sed’s -i solved this.

     -i extension
             Edit files in-place, saving backups with the specified extension.  If a zero-length extension is given, no backup will be saved.  It is not recommended to give
             a zero-length extension when in-place editing files, as you risk corruption or partial content in situations where disk space is exhausted, etc.
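The short version: on the BSD sed that ships with MacOS, the backup extension is a separate, mandatory argument ('' meaning "no backup"), while the GNU sed found on most LINUX boxes treats it as an optional suffix glued onto -i. Side by side, using a throw-away pattern and file name:

sed -i '' 's/foo/bar/g' somefile.txt     # BSD/MacOS: empty extension = edit in place, no backup
sed -i 's/foo/bar/g' somefile.txt        # GNU/LINUX: extension is optional, so no '' is needed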

Final Command Query
The objective was to simply hammer through the source code and update the version tag from 0.0.1 to 0.0.1c. Running egrep in recursive filename-list mode ( egrep -rl ) for the string I wanted ( 'version 0.0.1' ) gave me a file list, which was then piped into xargs, which expanded that list into the sed command ( sed -i '' 's/version 0.0.1/version 0.0.1c/g' ). And voila.. 18 files changed with just a little bit of effort:

egrep -rl 'version 0.0.1' src/* | xargs -I@ sed -i '' 's/version 0.0.1/version 0.0.1c/g' @

Since this took me more than 5 minutes to figure out, I decided I’d take 5 more and hopefully help someone else down the line.

Install Redis on AWS EC2

Redis is fairly simple to install and get running. I found the best way to do this on CentOS-based AWS EC2 nodes is to use the following steps.

Install Pre-Requisites

Redis will require several pre-requisites. Your system may vary, but these are the cases I ran into when running the build in August 2015 with the latest AWS system updates. Some of these are required to run the tests, others are required for Redis itself.

TCL 8.5 or higher for Test

You need tcl 8.5 or newer in order to run the Redis test suite:

yum install tcl
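
If you want to confirm which version yum actually gave you (anything 8.5 or newer will do), a quick one-liner from the shell does the trick:

echo 'puts $tcl_version' | tclsh
8.5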

Download latest Redis package

Assume super user, move to a safe directory (I like /usr/local) and download the latest build:

sudo su -
cd /usr/local
wget http://download.redis.io/redis-stable.tar.gz

Extract Files

Once the main tarball has been downloaded, extract the files and start the configuration process.

tar xvzf redis-stable.tar.gz
cd redis-stable

Build the Binary

Build the binary. Redis does not seem to require ./configure to be run; the necessary make files are already in place. Just run make and install!! If you decide to run ‘make test’ (which I suggest you do), it may take 10-15 min. to complete depending on the power of your AWS instance.

make
make test
make install
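
Assuming the install went cleanly, the binaries should now be on your PATH (typically under /usr/local/bin); a quick version check confirms it:

redis-server --version
redis-cli --version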

Set Overcommit to TRUE

Redis is going to complain unless you have some level of memory overcommit enabled. This is easy to do (again, you must be root or a sudoer to do this). Add ‘vm.overcommit_memory = 1’ to /etc/sysctl.conf and then reboot, IF you can safely do so on your machine (best to check and make sure there are no live service interruptions or other personnel using the system).

vi /etc/sysctl.conf

Add this to the end of the file:

# Required by Redis to enable overcommit setting:
vm.overcommit_memory = 1
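
If a reboot isn’t practical right away, the same setting can also be pushed into the running kernel (it still needs to be in /etc/sysctl.conf to survive the next boot):

sysctl -w vm.overcommit_memory=1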

Reboot

init 6

Configure Redis

Create a working directory for the redis disk files. I like to use the following:

mkdir /var/redis
mkdir /var/redis/db

Copy the base configuration file to /etc/ and customize to your environment.

mkdir /etc/redis
cp redis.conf /etc/redis/6379.conf
vi /etc/redis/6379.conf

I made the following changes to the configuration file. I can’t guarantee all or any of these will be correct for your configuration:

daemonize yes
bind 127.0.0.1
tcp-keepalive 60
logfile "/var/log/redis-server.log"
dir /var/redis/db

Copy the startup file into /etc/init.d

cp utils/redis_init_script /etc/init.d/redis

Add the start command to root’s crontab. Yeah, so this might be a cheater method instead of adding this to the system’s rc.d files, but it’s also easy to disable.

crontab -e

@reboot /etc/init.d/redis start
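
The more conventional alternative on a CentOS-style box would be registering the init script with chkconfig, roughly like this (note the stock redis_init_script may need a chkconfig comment header added before chkconfig will accept it):

chkconfig --add redis
chkconfig redis on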

Start Redis Server

Starting the server from the command line is a good way to verify it’s functional. It’s easy to do, just type ‘redis-server’. Hit CTRL-C to kill and exit once you’ve tested the launch. If it starts up, you should see something like this:

[root@ip-10-000-000-00 redis-stable]# redis-server

31408:C 04 Aug 21:55:00.578 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
31408:M 04 Aug 21:55:00.579 * Increased maximum number of open files to 10032 (it was originally set to 1024).

31408:M 04 Aug 21:55:00.581 # Server started, Redis version 3.0.3

31408:signal-handler (1438725473) Received SIGINT scheduling shutdown…
31408:M 04 Aug 21:57:53.628 # User requested shutdown…
31408:M 04 Aug 21:57:53.628 * Saving the final RDB snapshot before exiting.
31408:M 04 Aug 21:57:53.631 * DB saved on disk
31408:M 04 Aug 21:57:53.632 # Redis is now ready to exit, bye bye…

If that looks OK, then start it using the startup script installed above. This should start redis as a daemon (service) depending on how you edited the configuration file. If you did it the way I did, then it will start as a daemon.

/etc/init.d/redis start

Starting Redis server…

Test to make sure it’s listening by using the ping command. If it’s alive and listening, you’ll receive back a ‘PONG’:

redis-cli ping

PONG

FINAL STEPS — Reboot and Verify!

A good and proper final test, assuming you are able to reboot the system without causing trouble to any live services or other personnel, is to REBOOT, then verify that redis has restarted as expected.

init 6

Connection closed by remote host.

[ec2-user@ip-10-000-000-00 ~]$ redis-cli ping

PONG

CONGRATULATIONS!! You are now the proud owner/maintainer/RP of a Redis server!

NEXT…

Doing something productive with Redis… (to be continued)

Digging through the past – my early programming.

I have been on a mission this year to simplify my life. Part of that includes disposing of once ‘possibly useful’ stuff. One such pile of ‘possibly useful’ included over 200 3.5″ floppy disks, circa 1995.

Not wanting to throw away anything that might be useful, I had to see what was on those disks. The original plan was to use an old PC that I used for LINUX development (and has a 3.5″ drive) to mount the disks and read the contents. That plan slowly devolved into a realization that things would not be that simple. After trying to mount several disks, only to have the mount time out and fail, I came to the realization that I’d have to find a USB floppy disk drive to plug into my Mac, or (gak.. gag.. snarl) I’d have to construct a Windoze box to read those disks. As much as I found the entire concept utterly revolting, it was my only sane solution. If anyone could possibly consider intentionally installing any Micro$oft operating system as ‘sane’. So be it, that was the goal for a Friday night.

Having access to a Dell Optiplex mini-case Pentium III machine (that I bought at a computer store across town for $25 about 2 years ago), with a 3.5″ floppy drive, I decided to give it a try. Sadly, the 10GB disk drive inside was dead… so I needed to find another old IDE drive. Such antiques are not so easy to find, EXCEPT in my collection of stuff that would be ‘possibly useful’. That consisted of a stack of 8 drives ranging from 8GB to 500GB in size.

Next hurdle was the OS. What was I going to install? Clearly a Pentium III is NOT going to run Win7, or XP for that matter. I’d have to find an OLD OS. So, that’s what I did. Being the pack rat I am, I found my old MSDN kit (yes, you read that right, back in the 90’s I was a registered Windows developer.. more on that below). I no longer had the full 30-CD catalog of stuff, but I did, for some reason, retain my Windoze NT MSDN install image with developer license. This… was going to be my conduit to the archive of my programming history.

I’ll save you the one-day odyssey of dealing with the ancient and inexplicable Windoze limitations on hard drive size. Even with the cylinders and heads and a translation formula.. I ended up with only one drive that would work. And to top it off, despite Windoze’s complaint that the max partition size would be about 8 GB (my iPhone has more storage!), the MAXIMUM partition size that NT would actually install on was a 2048MB section of disk. This discovery through trial and error cost me several hours, and I’m guessing at least 1 year off my life.

Finally, I was able to start going through the disks. This is when I came to the realization that my LINUX box just *might* have been able to read the disks all along. Out of the 200 or so disks, only a handful were usable. Most were either un-formatted, or so badly degraded that they were riddled with CRC errors (you remember CRC errors… I do, except now I’ve suddenly developed a slight tick.. the costs of Windoze can’t be measured in dollars alone.. oh no.. oh no indeed). Odds are, every disk I tried to use earlier was junk anyway. This is something I need to follow up on later this week.

In the end, the Dell Optiplex Pentium III was brought to life running Windoze NT. A nightmare of an OS if ever there was one. In fact, I recall in my 1st stint at Hewlett-Packard (mid 90’s) we were forbidden to even mention Windoze NT in our workgroup. The fear of NT vs. HP-UX 9.0 in the marketplace was that great. In retrospect, the comparison is laughable. HP-UX was a real UNIX system, with a real UI; Windoze was.. well.. garbage. The really sad part is that NT won the market share war. There is no accounting for intelligence within IT management.. this is an axiom proven again and again. Digressing….

Most of the stuff I found on the floppies that I could read was worthless. I did have some old 90’s website content that some day I might pull off and do a way-back machine sort of look at my very early web development work, from when Netscape 1.0 was king and there was no such animal as Internet Exploder. (Really, and sadly, I suspect many people cannot even imagine that… tsk tsk.. sorry for you, I truly feel.)

I did manage to find one good working bit of code, and I’m including a screen shot of it here.

This is a bit of code that I wrote using Borland C++ (10,000 points if you remember Borland and know where their headquarters were… 1,000,000 points if you have a photo of it, that you took!). Having had the choice of using the MS Frameworks or buying my own compiler, I bought my own (and it was not cheap, I can assure you) full-blown compiler. I was even part of their developer and Beta tester network. I had some early releases of BladRunner (a DB precursor to Paradox) on floppy, but I tossed those out during the purge. Shoot.. digressing again….

When I was working at HP, we had a set of very crude batch scripts that performed the very critical task of monitoring our telemarketing PC’s located across town in Santa Clara (the data center where I worked was in Sunnyvale). The batch script ran on a dedicated PC in our data center. It was a fragile concern at best. While working graveyard shifts baby-sitting multi-million dollar equipment, I took it upon myself to take some C and C++ programming classes at the local college, and use my time in the data center to get my homework done. While writing code, it was quite clear to me that this fragile batch file system should be replaced with a proper Windows program, one that would be more robust, provide more information, and not require the PC to sit there running a single program that would crash if anyone touched the keyboard (it would interrupt the batch file.. whoever came up with that idea should have been fired on the spot!).

I took a couple of nights to re-write the entire thing in C++, as a Windows application. Here is a picture of the screen with the ‘About’ dialog. Since the constellation of PC’s it monitored is of course not on my home network (they were tossed in the trash at HP around 1997), there was not much else to show.

Telos Vision - Download Monitor

It sure did bring back some memories and led me to think back on the extensive amount of programming experience I have on a wide variety of platforms and languages… sometimes I get so focused on my current objectives, I forget how much programming experience I bring to the table, well over 20 years’ worth.

I also found a couple of command-line and basic text-windowing programs that I actually sold for $6.00 a copy back in the late 80’s. Yes, I’ve been writing and selling software for nearly 30 years. Yes, you read that right: 30 years. It’s even hard for me to think about that.

Thinking back to my first experience with a computer, it was at UC Berkeley back in 1976. At that time, the computers were all timeshare machines, and time on them was not cheap. There were no monitors or terminals. Interaction with the computers was via binary status boards (recall all those blinky lights on computers in old movies.. yeah.. that’s what I’m talking about) and the rare interaction with teletype-style printer terminals. People allowed to directly interact with computers were highly trained individuals; no mere mortal was allowed to come in contact with a computer. Quite the contrast to today, where even those with the least ability are allowed to not only touch them, but they can OWN them! And worse yet, they are allowed to connect them to world-wide networks. The concept was so foreign and frightening to those early computer scientists.. the thought of it ever happening was simply an impossibility. Who would be so stupid as to allow that sort of thing to happen? Well, I think the history on that is documented well enough that I need not even attempt to cover it here.

Now, I wonder if I can fix those disks with the CRC errors? Where is the old dial-up BBS that kept that fixdisk.exe program I loved so much?

Massive Chinese cyber espionage network discovered

Researchers in Toronto released a report this weekend regarding the discovery of a massive cyber-espionage and data-theft network that appears to have 3 of its 4 Command-and-Control (C&C) servers located in China.

Vast Spy System Loots Computers in 103 Countries
By JOHN MARKOFF
Published: March 28, 2009

TORONTO — A vast electronic spying operation has infiltrated computers and has stolen documents from hundreds of government and private offices around the world, including those of the Dalai Lama, Canadian researchers have concluded.

Link to full New York Times article

Details of the exploit vector are not exactly spelled out in the article, but it would appear that this is a software infection capable of monitoring email and other traffic on the compromised computers.  By the description, it sounds like the malware/trojan/crimeware employs a network sniffer to watch traffic I/O on the infected machine, sending interesting data back to one (or more) of the C&C systems.  The researchers also indicated that they stumbled upon some of this by accident, and there could be other capabilities of the network not yet exposed.

I plan to look into this further to see what types of systems have been infected.