Today I learned… there are [at least] two ways to inject a service into Mocha unit tests using the Chai assertion library and Angular mocks. This is just a little thing, but I’ve seen this difference in a few unit testing tutorials and it confused me the first time I came across it.
In my project I have a service called mealsServer. No need to worry about what it does, for now we’re just testing that it gets injected successfully (in other words, exists).
Service Injection Technique #1:
Here I am declaring mealsServer as a variable and then injecting _mealsServer_ using beforeEach:
var mealsServer;
beforeEach(inject(function(_mealsServer_) {
mealsServer = _mealsServer_;
}));
The underscores are an oddity: they're a little syntax trick that makes it possible to use the same name for the injection as we use for the variable. In other words, if we didn't inject _mealsServer_ wrapped in underscores, then var mealsServer would need a different name. I'm all for keeping names consistent whenever possible, so I'm glad I learned about this.
Service Injection Technique #2:
And here’s an alternative: here I am injecting the mealsServer service as part of the it block:
it('should have a working meals-server service', inject(function(mealsServer) {
expect(mealsServer).to.exist;
}));
I’m still learning the ropes of unit testing, so I’m sure there are advantages/disadvantages to each of these approaches. I’m relying a lot on this tutorial: Testing AngularJS Apps Using Karma to get me started.
Personally, I like injecting the service in the same line of code that relies upon it being there. I think this is neater and will hold up better as this file becomes longer.
For reference's sake, here's what my meals-test.js file boils down to. It's small right now, but just getting to the point of having (any!) tests run successfully was a several-hour endeavor. In this version, I am just testing that my services exist, using technique #2 from above.
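A minimal sketch of that file (the module name cbmApp is a stand-in for whatever your app module is called, and expect is assumed to be available globally via karma-chai):

describe('meals services', function () {

  // load the application module that registers the mealsServer service
  beforeEach(module('cbmApp'));

  // Technique #2: inject the service right in the it block
  it('should have a working meals-server service', inject(function (mealsServer) {
    expect(mealsServer).to.exist;
  }));
});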
Tired of restarting your server manually, over and over again, whenever you change something? Straining under the labor of recompiling your Less / Sass dozens of times an hour? No more. Let a robot do it for you. A robot named Gulp.
Why Gulp?
First off, I should mention that there are several build automation tools written specifically for Node.js apps. Grunt is probably the most popular, but there’s also Jake and a couple others. I started with Grunt, but I’m liking Gulp more and more as I use it. It’s faster, more flexible, and just “feels” more intuitive and straightforward.
Honestly, though, which one you use isn't important. All that's important is that you use SOME KIND of build automator / task runner. Over the long run, it will save you hours of repetitive, frustrating, mindless drudgery. Sound good? Read on.
Installing Gulp
To use Gulp in your app, you must install it globally onto your system, as well as locally in your project directory. Use the following commands in terminal, while inside your project dir:
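npm install -g gulp
npm install --save-dev gulp
touch gulpfile.js

The first command installs the gulp command-line tool globally, the second saves Gulp to your project's devDependencies, and the last one just creates the empty gulpfile.js we're about to fill in.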
Open the gulpfile.js you just made in a text editor and put in the following:
var gulp = require('gulp');
gulp.task('default', function() {
console.log('If you can read this, gulp is working!');
});
Go back into your terminal, type gulp, and press Enter. You should see the following output:
[20:33:59] Using gulpfile ~/YourProjectFolderHere/gulpfile.js
[20:33:59] Starting 'default'...
If you can read this, gulp is working!
[20:33:59] Finished 'default' after 74 μs
So what happened here? We created a task called default in the gulpfile, which calls a function when the task is run. That function then performs a console log. Using the gulp command with nothing after it will run any task named default. It’s a good idea to always have a default task that does the important work of building and running your project. That way, anybody else who has to work with your project can just type gulp in the terminal and see it run without having to paw through your code.
Your First Useful Gulp Task
So that was fun, but not particularly worthwhile. Let’s do something useful with Gulp!
Let’s say you have a directory full of JavaScript files, a directory called ‘js’. All these files need to be copied over to a directory called ‘build’ before you can publish your app. No problem! Put this into your gulpfile:
var gulp = require('gulp');
gulp.task('copy-js', function () {
gulp.src(['js/**/*.js'])
.pipe(gulp.dest('build'));
});
gulp.task('default', ['copy-js']);
There’s a lot going on here, so I’ll explain bit-by-bit:
We created a new task called copy-js, which will do all our copying for us.
The first line inside that task, beginning with gulp.src, tells gulp where to look for the files we want to copy. That bunch of /s and *s we gave it is a pattern-matching string called a glob. Here’s how to interpret this glob:
The js/ part tells gulp to look inside the directory named ‘js’.
The **/ part tells gulp to look inside any subdirectories within the ‘js’ directory.
The *.js part tells gulp to find all files that end with the .js file extension.
On the next line, we chain a method onto the end of gulp.src… specifically, the .pipe() method. .pipe() takes the output of the previous method (i.e., the .js files we found) and lets us use it as input for another method, just like a unix pipe. This is extremely useful, as you’ll soon see.
.pipe() passes the files we found to gulp.dest(‘build’). gulp.dest() is used to save files to a particular location. Which location? Why, the one we told it: the ‘build’ directory.
Finally (and importantly!) we changed the default task. Instead of executing a function, default will now execute a list of sub-tasks. For now, we just want it to execute our copy-js task.
Now, if you type gulp into the terminal, any JavaScript files in the ‘js’ directory will be copied into the ‘build’ directory. Gulp will even create a ‘build’ directory for you if it can’t find one. How thoughtful!
Watch This
“This is all well and good,” you might be thinking, “but how does this actually save me time?” After all, you still have to keep typing gulp into the terminal every time you want this copy and paste to happen, right?
The answer is no, you don’t. Gulp can run tasks for you, automatically. Enter gulp.watch():
var gulp = require('gulp');
var jsDir = 'js/**/*.js';
gulp.task('copy-js', function () {
gulp.src([jsDir])
.pipe(gulp.dest('build'));
});
gulp.task('watch-js', function () {
gulp.watch(jsDir, ['copy-js'])
.on('change', function (event) {
console.log('File ' + event.path + ' was ' + event.type);
});
});
gulp.task('default', ['watch-js']);
Ok, so what happened here?
We made a new task called watch-js. When this task is executed, gulp.watch() will keep a close eye on the directory we tell it, watching for files inside to change. When they do, the tasks in the array we provide will be executed… in this case, the copy-js task.
To put it simply, whenever we change a .js file, it’ll be copied over automatically. How cool is that?
We chained .on() to the end of gulp.watch(). This lets us execute code when certain conditions are met. In this case, when a file changes, we execute a function. This function uses the event parameter to let us console log which file changed, and how it was changed (added, changed, deleted, etc.)
Also, we put the JavaScript directory glob into a separate var called jsDir, which we use in both the copy-js and watch-js task. That way, we can make sure it stays consistent.
Finally, we change the default task to execute watch-js when it’s called. By the way, you’ll notice this is an array; we can comma-separate multiple sub-task names to be called here, if we choose.
Sweet! What Else?
Gulp can help you automate all kinds of development-related tasks, including but not limited to:
Linting
Unit / Integration Testing
Bundling / Concatenation
Minifying / Compression
CSS pre-processor compilation (i.e. Sass / Less)
Image resizing / processing
Asset versioning
Running shell commands
To learn more, check out Gulp’s documentation and browse their extensive, searchable list of plugins. To use a plugin, npm install it, require it at the top of your gulpfile as a variable, and then use it based on the plugin’s documentation. Like the following example does with gulp-sass:
var gulp = require('gulp');
var sass = require('gulp-sass');
gulp.task('default', function() {
gulp.src('sass/*.scss')
.pipe(sass())
.pipe(gulp.dest('css'));
});
That should be enough to get you started. Happy gulping!
This tutorial is about a neat trick you can use with ng-repeat and inputs using AngularJS. This is just one tiny part of a larger AngularJS project of mine you can explore here: Chicken Breast Meals on GitHub.
Let's say you are building a user input form that lets the user enter a series of items in a list, such as ingredients in a recipe. You could have the user click a link to add a new input field before typing in each ingredient, but nowadays that's an extra (and annoying) step for users.
What you really want is a list of inputs that grows itself, offering a new blank input in response to each addition the user makes:
Infinitely-expanding list grows as the user adds to it
<h2>Ingredients</h2>
<ol class="ingredients-list">
<!-- loop through and display existing ingredients -->
<li data-ng-repeat="ingredient in formMeal.ingredients track by $index">
<textarea name="ingredientLines"
type="text"
data-ng-model="formMeal.ingredients[$index].name"
placeholder="Add ingredient"
data-ng-change="changeIngredient($index)">
</textarea>
<!-- trash can button -->
<a href="" data-ng-show="ingredient"
data-ng-click="formMeal.ingredients.splice($index,1)">
<img src="/assets/delete.png"/></a>
</li>
</ol>
When the user selects a recipe to edit in the admin page, that selected recipe is represented by an object called formMeal. Inside formMeal are properties like:
name (which is saved as a String)
yield (saved as a Number)
cookTime (another Number)
ingredients (an Array of Objects)
On the <li>
The ng-repeat directive builds the list of ingredients by creating a <li> and a <textarea> for each ingredient already found in the saved recipe data. Each ingredient has an index in the ingredients array, so we grab its name out of the array of ingredient objects like so:
formMeal.ingredients[$index].name
Immediately following the ng-repeat directive is track by $index. This bit of code is easy to overlook but it's very important: it's what keeps the user's current textarea in focus while the user edits it. Without track by $index, the app kicks the user out of that text box after the first typed letter. (Ask me how much fun I had debugging this focus-loss problem…)
In the <textarea>
Each ingredient is represented by a <textarea>, and each one has its own ng-model directive pairing it with that particular index in the array.
data-ng-model="formMeal.ingredients[$index].name"
This lets us edit an existing ingredient anywhere in the list by that ingredient’s index. Since ingredients is an array, we need to pass it the index of the ingredient we’re editing via the <textarea>. (You can read more about ng-repeat and $index here in the Angular documentation.) This placeholder part is straightforward:
placeholder="Add ingredient"
This is what puts the default text into each <textarea> when the user hasn’t entered anything yet. It’s just a nice UX touch.
Finally, we have an ng-change directive. You can read more about ng-change here; basically, all it does is call the method (or do the thing) you tell it to, any time there's a change in the <textarea> it's associated with.
data-ng-change="changeIngredient($index)"
A change to the <textarea> (ie: user typing) causes the method changeIngredient() to run with each change.
We already saw that whenever the user updates text inside one of those <textarea> regions, this method gets called. (If you were to put a console log inside changeIngredient(), you would see it called every time you typed a letter into the textarea.)
changeIngredient(index) checks the index that's been passed in:
if that index is at the end of the array (ie: its index number is one less than the array's length), then we are editing the last ingredient in the list and we need to push an empty ingredient ('') to the ingredients array to make the empty box appear at the end
if that index is not at the end of the array, we just update whatever’s at this index since it’s an ingredient that already exists. This is why you don’t see an empty box get added to the end of the list if you’re editing a field that’s not at the end.
It’s important to observe that this method works by checking that the user is editing the last index (which is always the empty <textarea>). This is how we don’t spawn new, empty textareas for editing earlier ingredients in the list.
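Here's a minimal sketch of what that controller method can look like, assuming each ingredient is an object with a name property (matching the ng-model binding above):

$scope.changeIngredient = function (index) {
  var ingredients = $scope.formMeal.ingredients;

  // Editing the last (empty) textarea? Push a new blank ingredient so
  // another empty box appears at the end of the list.
  if (index === ingredients.length - 1) {
    ingredients.push({ name: '' });
  }
  // Otherwise ng-model has already updated ingredients[index].name,
  // so there's nothing more to do.
};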
When you initialize your data or your app, you’ll need to include something like:
$scope.formMeal.ingredients = [''];
or
$scope.ingredients.push('');
so that the ingredients list has an empty one in it by default. Your implementation needs will vary, of course, but hopefully this little guide gave you enough of a start to build this “infinity list” into your own AngularJS form!
Don’t miss the Plunker demo of a simplified version of this feature that you can play with and adapt to your own project.
The time had come at last to deploy Chicken Breast Meals to an external server so that it could be enjoyed by a larger audience. I chose Heroku because it’s a friendly “my-first-deployment” kind of technology that handles a lot of the nitty-gritty details that Amazon Web Services and others leave up to you. For a simple MEAN stack app deployment, Heroku has been sufficient for my needs so far.
However, despite Heroku’s fairly straightforwardness, I still encountered a number of problems along the way. This post is about all the steps I took to get my MEAN app from GitHub to Heroku.
For clarity's sake: the project is a MEAN-stack app (MongoDB, Express, AngularJS, and Node.js) built with Gulp and Bower, and my development environment is Windows 7 64-bit with msysgit's Git Bash.
And unlike Heroku’s tutorial, this tutorial assumes you already have a git repo on your hard drive and it’s already full of your project files.
Step 1: Open a Heroku Account and Add a New App to your Dashboard
Hopefully, Heroku’s site can walk you through this sufficiently well.
Once you have an account, add a new app via the dashboard. On the current version of the Heroku dashboard, adding a new app is done with the + button.
Heroku’s “add new app” button is easy to miss.
Step 2: Get the Heroku Toolbelt
Heroku’s own site will tell you to do this, too. Go to https://toolbelt.heroku.com/ and install the toolbelt appropriate to your environment. The toolbelt allows you to use the heroku command from your shell.
Step 3: Enter your credentials
Heroku’s toolbelt site walks you through these steps, too, but just in case you’re following along here:
$ heroku login
Enter your Heroku credentials.
Email: myaddress@gmail.com
Password (typing will be hidden)
Authentication successful.
You may get a response like this:
Your Heroku account does not have a public ssh key uploaded.
Could not find an existing public key at ~/.ssh/id_rsa.pub
Would you like to generate one? [Yn] Y
Generating new SSH public key.
Uploading SSH public key /home/jim/.ssh/id_rsa.pub... done
If this happens, choose Y and continue.
Since you already made a new Heroku app in step 1 you should skip the “heroku create” step.
Step 4: Add your Heroku app as a remote to your existing git clone’d repo
If you’re like me and you already have your git repo as a folder on your hard drive, you don’t need to make a new repo, you just need to add Heroku as a remote for it.
Navigate to your app’s root folder with cd and then use heroku git:remote -a yourappnamehere to add your remote.
If you follow these steps on Heroku’s own site, it will suggest using git init here (which you shouldn’t do since you already have a repo set up) and it will fill in your chosen app name where mine says chickenbreastmeals.
These are the steps I used to add my Heroku app as a remote to my existing GitHub repo:
$ cd /your/project/location
$ heroku git:remote -a chickenbreastmeals
Step 5: Attempt to push to Heroku – Permission Denied!
Pushing your repo to Heroku is done with just one line:
$ git push heroku master
…But if you’re like I was originally, you’ll get a permission denied (publickey) error.
(If you don’t get this error, hooray – you’re probably good to go. Or you’re stuck on a new problem that I didn’t encounter. Good luck.)
$ git push heroku master
Permission denied (publickey).
fatal: Could not read from remote repository.
Oh, snap. I Googled the “git push heroku master permission denied (publickey)” error and landed on this helpful Stack Overflow question. The first reply suggested a series of steps starting with heroku keys:add ~/.ssh/id_rsa.pub
heroku keys:add ~/.ssh/id_rsa.pub // or just heroku keys:add and it will prompt you to pick one of your keys
Alas, in my case, this didn’t work. Here’s what I got:
Uploading SSH public key c:/Users/Mandi/.ssh/id_rsa.pub... failed! Could not upload SSH public key: key file 'c:/Users/Mandi/.ssh/id_rsa.pub' does not exist
Well, that’s just super: I didn’t have an id_rsa.pub file yet. I needed to generate a new set of SSH keys, as detailed in my next step.
Step 6: Generate SSH keys
Fortunately, GitHub has an excellent guide on generating ssh keys, which will get you most of the way there. I encountered some problems along the way, which I’ve explained in this section.
The first step in GitHub’s instructions failed for me, of course, since I had no SSH keys.
All I got was:
ls -al ~/.ssh
total 7
drwxr-xr-x 1 Mandi Administ 0 Nov 10 16:04 .
drwxr-xr-x 48 Mandi Administ 12288 Nov 10 16:04 ..
-rw-r--r-- 1 Mandi Administ 405 Nov 10 16:04 known_hosts
If you also have no SSH keys (files with names like id_dsa.pub, id_ecdsa.pub, id_rsa.pub, etc) you’ll need to move right along to GitHub’s second step and generate a new SSH key:
ssh-keygen -t rsa -C "your_email@example.com"
# Creates a new ssh key, using the provided email as a label
# Generating public/private rsa key pair.
# Enter file in which to save the key (/c/Users/you/.ssh/id_rsa): [Press enter]
Just press enter when it prompts for a file location – you want the default. You’ll enter a passphrase twice (remember what you type here!):
Enter passphrase (empty for no passphrase): [Type a passphrase]
# Enter same passphrase again: [Type passphrase again]
And then you’ll get something like this, telling you where your identification and public key were saved as well as your key fingerprint and a random ascii art image for your viewing pleasure.
Your identification has been saved in /c/Users/you/.ssh/id_rsa.
# Your public key has been saved in /c/Users/you/.ssh/id_rsa.pub.
# The key fingerprint is:
# 01:0f:f4:3b:ca:85:d6:17:a1:7d:f0:68:9d:f0:a2:db your_email@example.com
Caveat: I’m on Windows 7 64-bit using msysgit bash, so your experience may differ from mine. Responses to this answer suggest the problem is not unique to the Windows environment.
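Before ssh-add will work, the authentication agent has to be running; in Git Bash, starting it looks something like this (the exact invocation varies by shell):

eval $(ssh-agent -s)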
Anyway, now that the authentication agent is running I can properly complete the ssh-add step:
$ ssh-add ~/.ssh/id_rsa
Enter passphrase for /c/Users/Mandi/.ssh/id_rsa:
Identity added: /c/Users/Mandi/.ssh/id_rsa (/c/Users/Mandi/.ssh/id_rsa)
Phew! Onwards to the GitHub step.
Step 7: Add new key to GitHub account
Following GitHub guide to generating SSH keys still, the next step is to copy the contents of your id_rsa.pub file to your clipboard. This is easily done with clip, like so:
clip < ~/.ssh/id_rsa.pub
Go to GitHub and click the “Settings” gear icon in the upper right.
Click “Add SSH Key”
Give your key a title (I named mine after my computer)
Paste the contents of clipboard into the large field
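With the key added, GitHub's guide has you test the connection over SSH:

$ ssh -T git@github.com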
The authenticity of host 'github.com (207.97.227.239)' can't be established.
# RSA key fingerprint is 16:27:ac:a5:76:28:2d:36:63:1b:56:4d:eb:df:a6:48.
# Are you sure you want to continue connecting (yes/no)?
Type “yes” and if everything goes okay, you’ll get:
Hi username! You've successfully authenticated, but GitHub does not provide shell access.
Oh, yeah – I just remembered what I was trying to do before I went down the SSH error rabbithole: I was trying to push my GitHub repo to Heroku!
First, add that same SSH public key to Heroku with heroku keys:add:
$ heroku keys:add ~/.ssh/id_rsa.pub
Uploading SSH public key c:/Users/Mandi/.ssh/id_rsa.pub... done
Phew, success! Now I was able to push to Heroku.
$ git push heroku master
Warning: Permanently added the RSA host key for IP address '50.19.85.132' to the
list of known hosts.
Initializing repository, done.
Counting objects: 801, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (705/705), done.
Writing objects: 100% (801/801), 994.30 KiB | 519.00 KiB/s, done.
Total 801 (delta 419), reused 0 (delta 0)
This message was followed by several screens depicting the installation of node and my project’s node packages. Heroku handles this setup automatically, and in my case, the installation processes went off without a hitch.
Step 10: Check out your app on Heroku – Application Error, hooray!
I’m glad I didn’t celebrate too early, because my Heroku app looks like this:
Application Error
An error occurred in the application and your page could not be served. Please try again in a few moments.
If you are the application owner, check your logs for details.
And no, it doesn’t go away in a few moments.
Step 11: Using MongoDB? Install MongoLab on your Heroku app
If you’ve ever tried to run your app locally while forgetting to fire up your MongoDB first, then you’ve probably seen your build process fail due to your database not being up and running.
There’s really no way to know that a not-running database is the cause of the application error screen, but I’ll spoil the surprise for you and tell you that in this case, that’s exactly what it was. If your Heroku-hosted MEAN app is using a MongoDB then you need to install an add-on called MongoLab.
Go to your app’s dashboard and click Get more addons…
If your Heroku-hosted MEAN stack app requires MongoDB, add MongoLab as a free add-on.
The addons page looks different every time I come in here, but the MongoLab icon hasn’t changed:
Click the icon to learn more about MongoLab, including its pricing structure and features. You will have to enter a credit card number to enable MongoLab, but the sandbox tier (which is what you're using here) is free. (I think this is super annoying, BTW. If it's free, it shouldn't require a credit card to use. I've never actually been charged by Heroku or MongoLab.)
To install, head back over to your Command Line/Terminal window and enter:
$ heroku addons:add mongolab
You’ll get this sort of response:
Adding mongolab on chickenbreastmeals... done, v4 (free)
Welcome to MongoLab. Your new subscription is being created and will be available shortly. Please consult the MongoLab Add-on Admin UI to check on its progress.
Use `heroku addons:docs mongolab` to view documentation.
IMPORTANT SIDE NOTE: My server.js file is already configured to expect MONGOLAB_URI. I’ve provided my server.js code here in case you need to do the same to your server file:
'use strict';
var express = require('express');
var bodyparser = require('body-parser');
var mongoose = require('mongoose');
var http = require('http');
var app = express();
mongoose.connect(process.env.MONGOLAB_URI || 'mongodb://localhost/meals-development');
app.use(express.static(__dirname + '/build'));
app.use(bodyparser.json({limit:'50mb'}));
app.use(bodyparser.urlencoded({limit: '50mb', extended: true}));
require('./routes/admin-routes')(app);
var server = http.createServer(app);
// Heroku supplies the port via process.env.PORT; fall back to 3000 locally
var port = process.env.PORT || 3000;
server.listen(port, function() {
  console.log("Listening on " + port);
});
From here, I attempted to view my app again. This time I got a plain Cannot GET / message.
Le sigh. But this is progress – I don’t get an Application Error anymore, so the database installation made a difference. Checking the Chrome console, my Heroku app is generating this error:
Failed to load resource: the server responded with a status of 404 (Not Found)
Step 12: Giving Heroku access to my Build folder
I scratched my head a bit over this “cannot GET/” problem and Googled it, which led me to this Stack Overflow question, Heroku Cannot Get.
Just like the original asker, my .gitignore contained a line for my build folder, which meant Heroku had nothing to serve as it had no access to my “compiled” project.
I removed the build line from .gitignore, and pushed the updated .gitignore file and build/ folder to both GitHub and Heroku like so:
$ git push origin master
$ git push heroku master
Step 13: IT’S ALIVE!
At last, I see my app when I visit chickenbreastmeals.com. It’s lacking the database entries from my local development environment, so I’ll update this post once I get those in.
Hope this guide helped you deploy your MongoDB / AngularJS / Express / node.js app to Heroku! There's only about a thousand things that can go wrong between point A and point Z, so if something in this guide doesn't work for you it's probably a difference in our environments or an error on my part – please leave a comment letting me know (and start Googling – good luck!).
Addendum
Did you use Gulp to build your app and automate some build processes? If so, your app probably doesn’t look so hot on Heroku right now. This is because Heroku doesn’t know which of your Gulp tasks needs to run after all your Node packages are installed. Let’s fix that!
Dev Dependencies
First off, it's important to mention that if you installed any packages as a dev dependency (like you probably did with Gulp), Heroku will not include them in your build by default. This is because Heroku assumes you're deploying a production build, and will run npm install --production, which ignores dev dependencies. There are two ways to fix this:
1. In your app’s package.json, move Gulp and all related packages from the “devDependencies” list into the “dependencies” list. This is a pain and I do not recommend it.
2. Run the following terminal command to tell Heroku that it should use the standard npm install command:
heroku config:set NPM_CONFIG_PRODUCTION=false
Postinstall Scripts
With that taken care of, we need to tell Heroku what commands we want to run after all of our packages are downloaded and installed. Luckily, Heroku has made this easy! Just add a "scripts" block to your package.json file, along these lines:
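"scripts": {
  "start": "node server.js",
  "postinstall": "bower install && gulp build-libs && gulp build"
}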
The “start” script tells Heroku how to start my server: run node with the file server.js. The “postinstall” script is actually three commands separated by &&, ran in sequence: bower install, gulp build-libs, and gulp build. In my gulpfile.js, the build-libs task concatenates and minifies several libraries like Angular and Bootstrap. This task relies on those libraries being in the bower_components folder, which is why I run bower install first.
Troubleshooting
If any of the steps in this article don’t work, there’s a couple things you can try. The most helpful thing to know is that you can run common Linux shell commands on your Heroku container with heroku run. Like this:
heroku run ls -la
This is just like running ls -la on your own system, and will list all of the files in your Heroku deployment’s main directory. This is how I figured out that I need to run bower install: there was no bower_components folder in my deployment!
To start mongod, open Terminal (Mac) or Command Prompt (Windows) and navigate all the way into Mongo's installation folder, specifically its bin folder. On my Windows machine, that folder is here:
J:\mongo\mongodb\bin
Now use:
mongod
On Windows, I see a wall of connection spam scroll by. Leave this window open and go to the next step.
Problems starting MongoDB?
If you get the “Unable to lock file: data/db/mongod.lock. Is a mongod instance already running?” problem, you probably have multiple instances of mongodb already running. This can happen as you switch projects, switch between user accounts on the same machine, etc.
To fix it, do this to list your computer’s processes and filter them to just mongo (this example is from when I had the problem on my Mac):
ps aux | grep mongo
On my machine, running that command revealed a couple instances of mongo already running (these were started by Jim using a separate account on the same computer). The third process in the list (the one owned by mjgrant) is the grep itself.
Because my mongo instance was started by “root” (another Mac account, really), I had to be all dirty and use sudo to kill it by its process number (second column from the left).
sudo kill 61180
If you run the ps aux command again, you should see that there are now no instances of mongo running. If there are, just kill them using the same steps.
But what’s this? Trying to start mongo gives me this error now:
2015-04-26T11:30:11.114-0700 [initandlisten] couldn't open /data/db/memry_database.ns errno:13 Permission denied
2015-04-26T11:30:11.114-0700 [initandlisten] error couldn't open file /data/db/memry_database.ns terminating
2015-04-26T11:30:11.114-0700 [initandlisten] dbexit:
Rather annoyingly in our shared-computer situation, mongo’s knowledge of databases transcends user accounts. Navigating up to /data/db I can see all the databases on this computer. cbm_database is the one I’m trying to use, but mongo is choking on trying to access Jim’s memry_database.
I check their permissions…
ls -la
When asked why his databases belong to “root”, Jim says, “I probably did it wrong” :D Alas, we don’t know how we ended up with databases belonging to “root”, but Jim must have been using mongo as a root user, hence why he didn’t run into problems accessing databases owned by mjgrant.
Anyway… I used chown to assign ownership of these rogue root databases to my own account to unblock my work. (Standard disclaimer applies: use sudo with caution.)
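Something along these lines does the trick (adjust the user and path to match your own setup):

sudo chown -R mjgrant /data/db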
I run ls -la again and confirm that now I own all of the databases.
Now you should be able to start MongoDB with…
mongod
And now you should see the connection data:
2. Start the Mongo Shell
Open a new window (and navigate again to the bin folder if you’re on Windows).
mongo
This line starts up the Mongo shell.
(So to recap, mongod has to happen before mongo.)
On Mac:
On Windows:
MongoDB shell version: x.x.x
connecting to: test
You can now start your localhost server. (If you were blocked by Error: failed to connect to [localhost:27017] that should now be resolved.)
From here on out, commands you type into the command line will be mongo-specific.
3. Viewing your MongoDBs
Let’s say you want to see your databases:
show dbs
show dbs delivers a list of your databases in your terminal window. On mine, the result is:
> show dbs
admin              <empty>
local              0.078GB
meals-development  0.078GB
4. Using your Mongo DBs
These are your database names. Go inside them with “use”:
use meals-development
Once you’re “using” a database, though, the terminal doesn’t give much clue as to what to do next.
5. Viewing Collections
A collection is a group of MongoDB documents. Generally, they’re similar in purpose, but they don’t have to conform to one shared schema. You can see collections inside a db by typing:
show collections
As an example, inside my meals-development database I have:
meals
system.indexes
Ah hah, finally. Now I know the name of the collection that contains my recipe (meal) data.
6. Look inside a collection
We’re almost to the good part. To see inside the meals collection, type:
db.meals.find()
You should get a number of objects with ids, names, etc. Each object will start with something like: { "_id" : ObjectId("544dabfba054…
That’s it!
This was just a short guide to my most commonly used MongoDB shell commands. When I’m setting up a new db, I use these steps to look inside my db and see if data is being saved the way I expect it to.
Forking one of your own Github repositories ought to be easy, right? After all, forking somebody else’s repo is as simple as clicking a single button! Surely you can just press that same button on your own repo?
NOPE!
If you press the Fork button on your own repo, the page will refresh and… that’s it. No error message, no suggested course of action, nothing. It turns out forking your own repo on Github is impossible, but don’t worry: following the steps below will get you the next best thing.
1. Create a New Repo On Github
First, go to github.com and create a new repository. This will contain our fork when we’re done. I’ll refer to this repo as “fork-repo”, with the original being “orig-repo”.
Make sure you don’t check the box for “Initialize this repository with a README”. You’ll see why in Step 4!
2. Clone the New Repo Locally
Next, make a local copy of the blank repo we just made. In Terminal, cd to the base directory you want to keep the fork in and then type the following:
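git clone https://github.com/yourusername/fork-repo.git

Substitute your own username and whatever you named the new repo in Step 1.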
3. Add the Original Repo as an Upstream Remote
We'll now add an upstream remote pointing at the original repo. This will allow us to pull files from the original repo, both now and in the future if we wish. Make sure you navigate to the directory you cloned the fork repo into first! Using the example names from above, that looks something like this:
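cd fork-repo
git remote add upstream https://github.com/yourusername/orig-repo.git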
4. Pull From the Original Repo
Now we can pull all the files from our original repo into the fork, like so:
git pull upstream master
Your fork directory should now be identical to your original repo!
Note that if you made a README.md for the new repo (or added any other file) you may have some merge conflicts to resolve before you can go to the next step. Make the necessary changes to your files and commit to resolve the conflict.
5. Push!
You’re done! Well, locally at least. All that’s left is to push your new fork repo back up to Github:
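git push origin master

(origin points at the new fork repo you created in Step 1, since that's the one you cloned from.)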
This summer I did something completely crazy awesome: I quit my game industry design job to attend one of those trendy "coding bootcamps" (specifically, Code Fellows in Seattle) and start a new career in web development! After 8 weeks of intense work, I completed the Full Stack JavaScript Development Accelerator on September 26th, 2014.
There aren’t a ton of reviews on coding bootcamps, so I thought I’d add my own to the mighty Interwebs and try to answer some of the questions that I had going into it.
TL;DR
It was awesome! I learned a TON and my fellow students were brilliant and enthusiastic. Highly recommend, would do again.
Instruction: My class was taught by Ivan Storck and Tyler Morgan. They were both excellent instructors: good at explaining things, patient with questions, up to date on trends and technologies.
Pacing: Intense! They introduced 5-10 new things a day. Homework filled every minute of my bus ride home and evening.
Job placement: I had not accepted a full-time position as of two weeks post-graduation, but I had been in contact with a number of hiring managers who got my info directly from Code Fellows. Update: I accepted a full-time software engineer position at Expedia, doing challenging and rewarding work with a great team of programmers!
The accelerator is not for beginners, so start writing code now if you’re interested.
What I did to Prepare for Code Fellows
Like most Code Fellows applicants, I was changing careers. Unlike many students, though, my past life was in software development. Even though I never wrote code for my design jobs, I was immersed in the lingo and processes of making a software product.
I’ll talk more about what I did, specifically, to cram for the class in a later section of this review.
Why I Didn’t Pursue a Bachelor’s of Computer Science Instead: Boot Camps vs. Degrees
Like a lot of boot camp attendees, I considered going for a Computer Science degree instead. I live near good schools and I’m sure I could have succeeded at any of them. Boot camps are too new to really know if they’re going to replace the traditional CS degree path into programming. And tech directors and hiring managers are right to worry about boot camp grads: apparently, some coding boot camps really suck.
But in my case, all that was really separating me from the front-end developer job I was aiming for was a better understanding of some very specific web technologies. After finishing the program, I can say that I definitely made the right choice for me.
I can’t say if boot camp is the right choice for high school grads hoping to skip the cost and time investment of a CS degree. I think that comes down to what you want to do, and how much you value a degree. There’s a lot of entrenched thinking (right or wrong) that having a degree is an indication of a job applicant’s worthiness, and until that changes, a degree will continue to open certain doors. Plus, a hard science education is virtually essential to get hired making airplane software, medical devices, and other things where the price of failure is very high.
Web development and game development seem to play more loose and free with degree requirements, and some of my brightest and bestest co-workers did not have degrees, so I’m not personally convinced of the necessity of a degree in order to write code for a living. Being great at what you do and being likable should guarantee you never run out of opportunities.
As a point of interest: almost everyone in my class of 18 students had at least one college degree, and the majority had workplace experience. A handful (maybe 4 of us) had software development experience in some capacity.
What I Knew Going In: My Coding Background Prior to the Boot Camp
I already knew a bit of ActionScript (learned it in college and used it at my first job to script menus), HTML, CSS, and WordPress (picked these up from my blogging hobby), Lua (from when I tried my hand at making little indie games in Corona SDK), and Java (from a free online Stanford course), but I wouldn't have called myself a programmer. At best, I was a hacker who modified existing things to make what I needed.
Still, there were a number of things I did in the year leading into my Code Fellows class that I think helped me do well:
Started and maintained several WordPress blogs – this taught me about web hosting, deploying to a web host, and a lot of web dev processes and lingo
Customized several WordPress themes – basically a crash course in CSS and PHP (I didn’t use PHP in the Accelerator, but there’s no “bad learning”, so to speak)
Completed Stanford’s CS106A Programming Methodologies – this taught me basic Java and object-oriented practices. The course videos are free on YouTube and the course materials are available on Stanford’s site. I cannot recommend this course enough. It gave me a stronger background in Computer Science than many of my fellow Code Fellows students had the benefit of, including an understanding of object oriented design, primitive data types, data structures, memory management, pointers, compiling, and more. It was worth it for the object oriented instruction alone.
Finished the JavaScript Road Trip on CodeSchool – A one month CodeSchool subscription was easily the best $30 bucks I’ve ever spent towards my coding education. CodeSchool is great at showing you all the stuff a language can do, but not so good at showing you how to use it outside of the CodeSchool vacuum, so use it as a way to get introduced before moving on to building stuff on your own.
Knew quite a bit of Git – I learned Git as a part of my last job, which was great because Git took me a while to wrap my head around, even though I had worked with version control before. Git proficiency freed up a lot of brain space for harder coding problems during the class.
Opened a GitHub account – All the class homework and class projects are hosted on GitHub. Already knowing how to create repos, clone repos, fork repos, create branches, resolve merge conflicts, merge branches, and collaborate with others on GitHub was big advantage for me.
Read the first several chapters of Code Complete – This “best practices” book is all about how to approach coding problems and how to structure the code you write. How long should a method be? What are good naming practices? Stuff like that. It’s approachable even to novices and I owe a lot of my good habits to it.
Followed this AngularJS tutorial and then built my own separate site using what I had learned – it took me a little while to really grasp MVC and the role of a framework like AngularJS, so spending a couple weeks on this before the 4 days it was covered in class was very helpful.
The class moved very fast and there wasn’t really time to get mired on a new technology or technique, so I tried to give myself a good grasp on the basics before the class.
That's not to say Code Fellows doesn't have its own preparation courses – it does, and I've outlined them in the next section.
Foundations I and Foundations II Review
Foundations I and Foundations II are Code Fellows’s pre-Accelerator preparation classes. When I took them, they were $500 and $1350 respectively (when paid for up front). Foundations I was pretty general, though we did write in JavaScript. Foundations II varies by stack – if you’re pursuing Python you’ll do a Python FII, iOS students go to an iOS FII, and so on. I took the JavaScript flavor of FII.
Schedule
Both Foundations classes meet two weeknights a week for four weeks each, 7-9pm. The classes are open to everyone; you don’t have to be committed to an Accelerator to take a Foundations class.
Both classes featured:
Live instructor-led demos on a big projector screen
Work time in class with access to TA’s
Challenging homework assignments to supplement instruction
A customized “what to work on next” guide for each student, given at the end of the class
Location
The Foundations I class was held at the University of Washington's Seattle campus. Parking was usually $5 a night, but it was occasionally free for unexplained reasons. The campus was beautiful, well-lit, and easy to navigate. The class was held in the biggest classroom I've ever been in, with about 150 students in attendance.
Here’s a pic I snapped 15 mins before class one night:
Foundations II was held at Code Fellows’s campus in South Lake Union at 511 Boren Ave. You can park in the Amazon garage around the corner (heading west on Republican, drive past Boren and then turn right into the garage). After hours parking has an unadvertised rate of $2.44 and it’s a ghost town by 7pm.
This parking garage doesn’t show up on Google maps. It’s a hidden gem.
Heads up to any Eastsiders – I live near the Kirkland/Bothell border and my drive into the city for the night classes was hell. I took either 520 or I-5, whichever one Google Maps predicted would be the shorter drive at the time of my departure. The shortest I ever made the trip was an hour, and the longest was 90 minutes. It was tedious and miserable, and there were no better public transit options. Inbound traffic is super jammed up between 5-7pm. Just something to take into account if you live or work on the Eastside and have dreams of getting to these classes in a reasonable amount of time.
Foundations Curriculum
I came into the Foundations classes with familiarity with most of the material they introduced, but I appreciated the “legitimization” of my self-taught knowledge. Plus, the homework was good practice.
Foundations I included:
A brief history of Computer Science
Using GitHub – forking repos, pushing to repos
Writing simple JavaScript loops
Simple CSS styling
Creating a simple card game, first without and later with a visual (in browser) component
Foundations II included:
Creating a menu using jQuery
Lodash and Underscore
An introduction to Big O
More Git practice
I wouldn’t recommend relying 100% on the Foundations classes for Accelerator prep. You won’t get enough coding practice unless you do as much as you can on the side in addition to the classwork.
Dev Accelerator: 8 weeks of kicking my ass with code
The pace was brutal, the homework was never-ending, and the whole 8 weeks flew by so fast I was shocked when it ended so “soon.”
A typical day introduced anywhere from 5-10 new technologies to read up on, learn, install, and use in the homework. (A lot of these were node packages.) The class squeezed every last drop out of me, which I loved – I feel like I got my money’s worth!
Ivan Storck and Tyler Morgan were fantastic teachers. Just absolute geniuses at this stuff and I miss them now that the class is over! They were both very approachable, knowledgeable, and good at assisting in ways that helped without just giving away the answer.
Technologies Covered
This will probably be hilariously out of date in 6 months, but here’s a list to give you an idea of just some of the material we covered:
node.js
Workflow and build tools – Grunt, Yeoman, Sass, JSHint, Browserify
npm packages galore
mongoDB – CRUD operations
authentication / authorization
Unit testing in Mocha
Karma, Jasmine testing
deploying projects to Heroku and Amazon Web Services
Google Maps API
Algorithms
Backbone and AngularJS
Data structures – linked lists, queues, stacks, arrays, using objects as hash maps
Whiteboard questions like the kind you might encounter in an interview
And by “covered”, I don’t mean “they mentioned this in a demo once”. I mean actually wrote code that used these technologies, usually as homework and/or for team projects.
Daily Schedule
I actually had no idea what to expect for the class schedule other than “every day from 9-4”, so I’ll share what mine was like here.
Class Days
Class was held every week day from 9am to 4pm for 8 weeks total. Mondays-Thursdays followed a schedule of co-working time in the morning and instruction in the afternoon. Fridays were “How to Get a Job” workshops and did not include any coding instruction (more on those later).
For my class, the 9am-1pm part of the day was “co-working” time. This means everyone in the class is expected to be present, but the time is for homework and asking questions. The instructors are available during this time for help.
The co-working space was big enough for several classes worth of students to hang out in there at once:
People typically left for lunch around 11:45-12:00 and everyone returned by 1pm. Food in South Lake Union is pricey, and I brought my lunch almost every day – Code Fellows has refrigerators and microwaves on site.
The 1pm-4pm part of the day was instructional time, where everyone sat at desks facing the projector and followed along on laptops to the live demos and lectures that were given by the class's two instructors. The instructors filled every minute until 4pm with useful stuff, which was great – I hate when a class I'm paying a fortune for ends early. ;)
Here’s my class near the end of an afternoon lecture session:
Some classes flip it and do instruction in the morning and co-working in the afternoon. On days when my bus ran late, though, I really appreciated missing 15 minutes of work time instead of 15 minutes of expensive instructional time. At $9,000 for the course (assuming you pay in full up front for the discount), every hour of instruction costs $93.75!
Team Project Weeks
Team Project weeks were weeks 4 and 8. Groups were self-selecting and typically 4-5 students each. The whole week was given as co-working time for teams to work together on projects that were then presented to the class as a whole on Friday. Instructors were available all day every day for help.
Team project experiences vary by team and individual. My two team projects went pretty well overall, and I learned a lot about Google Maps API and JavaScript (first team project) and Angular (second team project) by the time the week was done.
Like everything else in the class, the team projects are hosted on GitHub!
Find Fit – Week 4 (Google Maps and Places APIs, JavaScript)
Instead of class, each Friday was a “How to get a job” lecture or workshop session. There were six of these total, usually an hour or two in length. A couple of the sessions were pretty elementary stuff, but I’ve worked in software for nearly a decade and have introduced myself more times, written more resumes, and interviewed more people than I can remember by now. These sessions might be more exciting to someone without that experience.
But there were some real gems in the Friday sessions, too!
Gina Luna, a Code Fellows staffer and photographer, has a great eye for portraiture. She took photos of each student and we all got a bunch of professional-quality photos of ourselves to use on our GitHub and LinkedIn profiles.
Code Fellows also invited two alumni to speak to the class in one of the Friday sessions. Those guys were awesome, and I had fun picking their brains about what it’s like to work at their respective web dev companies.
I also loved the presentation given by Mitch Robertson – he showed us a lot of LinkedIn tricks and emphasized the importance of being likable (in my own experience with hiring people, being likable is often more important than raw skills).
The Hardest Stuff
All things considered, the most challenging aspects of the class were keeping current on the homework (more every day!) and the week where we did whiteboard challenges for a couple hours every afternoon in small groups. These problems were hard, but they became more manageable the more of them we did, and now I feel pretty well prepared to write code on a whiteboard in an interview.
How do I Job Now?
Code Fellows doesn’t place students in jobs, but they do have a bunch of “hiring partners” – companies that basically get first dibs on the Code Fellows resume stack.
I’ve already heard back from several of these companies, and they’re prestigious, exciting companies located in downtown Seattle, not Bumblefracktucky. So that’s awesome. (However, I’m only pursuing Eastside opportunities at this time.)
Networking
At Code Fellows, assuming you’re not a huge jerk, you’ll get to know a lot of great people who will be the start of your new professional network. I’ve personally never gotten a job where I didn’t already know someone on the inside vouching for me, so I think this “instant network” was just another one of the class’s benefits.
The Accelerator instructors also encouraged us to attend local Meetups, which I did, and I left with another half dozen really good contacts to keep in touch with.
In short, I met a lot of great people as a result of the class.
My Advice to Interested Students
Start programming now. This isn’t for beginners. The more you know going in, the more you’ll get out of it. You don’t want to be the person stuck on Git basics when everyone else is doing cool stuff.
Get some practice with algorithms, data structures, and Big-O. These topics really deserve more time than they get in the class, and they’ll probably hit like a bag of bricks the first time you encounter them.
If you’re a procrastinator or are just doing it for the paycheck, you probably won’t survive. This class was brutally difficult and frustrating at times, and the only way to survive it is out of a genuine interest and passion for computing.
Also, you'll need a laptop with a UNIX-based operating system. That means a MacBook or a machine with Linux installed. I started this adventure with a Linux machine but purchased a 15.4″ MacBook Pro refurb right before the accelerator. It was a good choice.
And that’s it!
Overall, a fantastic experience and I’m glad I went for it. I think it would have taken me at least a year to learn all this stuff on my own, and it really helped to have knowledgeable teachers pointing me at the right technologies, answering questions, and challenging me with things I might not have discovered (or given up on) if I had been relying entirely on self-teaching.
If you’re considering a coding boot camp and you live in the Seattle or Portland area, you should check out the next Code Fellows open house.
Today I learned… how to save my GitHub username and password so I don’t have to re-enter them every time I push something to GitHub from my Windows machine.
I’m hesitant to disobey the word of GitHub, so instead of relying on SSH, I followed GitHub’s instructions to use a credentials helper.
Credentials Helper Setup
However… GitHub's explanation of how to cache your password with the credentials helper isn't very clear. They tell you to enter this line and then don't tell you what to do next.
Here’s what I did – worked for me.
In your Git Bash window, enter this line:
$ git config --global credential.helper wincred
Now push a change to Github and enter your credentials – this is where your username and password information gets saved to the credential helper.
You won’t get any feedback telling you that, but you can confirm it worked by pushing another change. This time, you shouldn’t have to enter your credentials again.
DailyBlogTips’s category view shows just the post titles in an unordered list.
By default, WordPress (and Genesis) gives you two options for displaying posts by category: either their full form one after another (do not want), or a truncated version with the post title and excerpt (also do not want).
My solution, which renders the titles as <a href> links between <li></li> tags, required modifying the loop when on a Category page.
Please note that this code requires a theme built on the Genesis framework, though it should not be hard to modify the hooks to suit your framework if you know your way around a bit.
Copy the contents of functions.php into your child theme’s functions.php file.
Here’s what this is doing, in English:
When the current page is ‘Categories’ (Line 9), don’t do the usual genesis_loop (Line 13). Instead, do this custom loop (Line 14) called mjg_custom_loop.
Over in mjg_custom_loop, create a new unordered list (Line 21) and for each post (Line 22), echo its permalink and its name into a set of <li> </li> tags (Line 24).
The contents of style.css will probably need to be customized to fit your site’s design, but hopefully this is enough to get you through the hardest part which is customizing the loop on the category page.
To be an effective full-stack Javascript developer, you pretty much have to use npm (Node Packaged Modules).
npm is the official package manager for Node.js. Using it allows you to easily install packages such as underscore, express, grunt, gulp, socket.io from the command line. (read more about npm on Wikipedia and npmjs.org).
Many of these packages need to be installed globally to be used, like so:
$ npm install -g grunt-cli
The -g denotes a global install.
But, in some environments, attempting to install globally without including sudo might give you this sort of error message:
npm ERR! Error: EACCES, open '/Users/YourNameHere/.npm/-/all/.cache.json'
npm ERR! { [Error: EACCES, open '/Users/YourNameHere/.npm/-/all/.cache.json']
npm ERR! errno: 3,
npm ERR! code: 'EACCES',
npm ERR! path: '/Users/YourNameHere/.npm/-/all/.cache.json' }
npm ERR!
npm ERR! Please try running this command again as root/Administrator.
Should you just do what the error message says and run this command as root using sudo? The short answer is: NO! Packages can run their own scripts, which makes installing them with sudo about as safe as shaving with a blowtorch.
There are many ways to solve this problem, but the easiest I’ve found is to simply change what folder npm installs packages into. Just use this terminal command:
npm config set prefix ~/npm
This tells npm to install packages into your home directory in an “npm” subfolder. It will automatically create this new subfolder the first time you globally install a package.
But we’re not done yet! You also need to modify your $PATH to include this new folder. To do this, open the config file of your shell in your favorite text editor. This is most likely “.bashrc” in your home directory, or “.zshrc” if you use Z shell. Once you have the file open, search for a line that looks like this:
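export PATH="$HOME/npm/bin:$PATH"

(If your config file doesn't have a PATH line yet, just add this one; the exact contents of your existing line may differ, but the part to add is $HOME/npm/bin.)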
The $HOME part of that just tells the shell to look in your home directory for a folder called “npm”, and then a folder inside that called “bin”. Once you’ve done these two things, you should now be able to install packages globally in npm without using sudo. Enjoy the tremendous feeling of freedom this brings.