According to a Content Marketing Institute report, 86% of B2B companies use content marketing, but only 28% say their efforts are effective. Nonetheless, there is little doubt that content marketing is an effective strategy for driving targeted traffic and generating high-quality leads. This suggests that something in the execution can be optimized to achieve content marketing’s full potential.
I see many companies that put in the effort to steadily produce well-researched, comprehensive content that has all the ingredients to engage an audience. At the same time, those very companies often neglect to fine-tune the factors that are just as important as the content's copy: minor optimizations to keywords, timing, distribution channels and titles often make the difference between a piece of content making an impact and passing unnoticed.
One way to find the factors that make your content shine is split testing. However, content has so many variables that testing them all is neither time- nor cost-effective. Another way is to use analytics, which are of course necessary but provide a narrow understanding of success factors, since they only let you analyze your own content, without a wider perspective. A better way is to learn from existing content on the web. This is where content data mining becomes useful: in sufficient amounts, data can pinpoint what makes an optimal piece of content for your specific audience.
Over the past decade or so, data of all kinds has become widely accessible, yet it remains mostly within the scope of responsibilities of the mysterious “data scientist”. Now, what if I told you that as a content marketer you can get data-driven insights without writing a single line of code? Imagine data-backed answers that confirm or disprove your content assumptions, or that point you in a direction you hadn’t considered before. Either way, data-driven content marketing is likely to boost your content’s reach, drive traffic to landing pages, generate better leads and ultimately improve ROI.
A simple and effective way to get plenty of data is by accessing open APIs. I will describe how to use a simple toolset to get data from inbound.org’s API, which is a good place to start due to its simplicity and its large amount of high-quality, community-curated content.
Inbound.org provides a simple API with a single endpoint. An endpoint is a URL that returns data instead of returning a web page. The endpoint we need can be accessed here: http://inbound.org/api/articles.
In a call to an API, you send a URL that specifies what data you would like inbound.org's server to return. Every call you make to the API has to start with the endpoint URL, and if you click the link above or paste the URL into your web browser, you will get back a wall of data. Luckily, you can customize the call to control and refine the returned data by appending parameters after the endpoint URL.
Inbound.org’s API offers seven parameters to control the content that is returned: type, sort, query, tags, location, offset and limit. Each parameter can be assigned a value and appended to the URL. The image below shows the description of each parameter, which can also be found at http://inbound.org/api:
Before refining the API call by appending parameters, we need to make sure to use a question mark (?) after the endpoint URL. The URL will look like this: "http://inbound.org/api/articles?" and we can start appending parameters after the question mark.
The content type I would like to analyze in this example is “articles”. I will set the type parameter to “article” by using an equals sign (=) as the assignment operator, so the added parameter will have the form: “type=article”.
I would also like to search only for articles that are about content marketing. For this, I will need to assign a string to the query parameter. Although intuitively it should look like this: “query=content marketing”, we have to consider URL encoding. If you are using a modern browser, you don’t need to do anything, since it will automatically replace the space with the correct URL format, "%20". If circumstances require you to manually encode the space character, here is a straightforward tool that will encode any string of text: http://meyerweb.com/eric/tools/dencoder/. Just type in your text and click “Encode”, then copy the new string, which will look like this: “content%20marketing”.
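If you ever want to do this encoding programmatically rather than with the online tool, Python's standard library handles it in one call (a minimal sketch; entirely optional for the copy-and-paste approach in this guide):

```python
from urllib.parse import quote

# Percent-encode a string so it is safe to place inside a URL.
encoded = quote("content marketing")
print(encoded)  # content%20marketing
```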
The final parameter I would like to add is the limit, which defines the number of returned items (articles). Because we want to find correlations, it is in our interest to get as many items as possible, so I will set the limit to the maximum allowed by inbound.org's API, which is one hundred: “limit=100”.
The URL components that we have so far are the endpoint URL with the trailing question mark (http://inbound.org/api/articles?) and the three parameters that we just defined (“type=article”, “query=content%20marketing” and “limit=100”). Parameters are appended by placing an ampersand (&) between them, and their order doesn’t matter. After connecting everything, the complete API call should look like this: http://inbound.org/api/articles?type=article&query=content%20marketing&limit=100.
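For readers who prefer scripting, the same URL can be assembled with Python's standard library; urlencode takes care of both the ampersands and the percent-encoding (a sketch under the parameter names described above, not a requirement for this guide):

```python
from urllib.parse import urlencode, quote

endpoint = "http://inbound.org/api/articles"
params = {"type": "article", "query": "content marketing", "limit": 100}

# quote_via=quote encodes the space as %20 (the default would use "+")
url = endpoint + "?" + urlencode(params, quote_via=quote)
print(url)
# http://inbound.org/api/articles?type=article&query=content%20marketing&limit=100
```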
All you have to do now is copy and paste this new URL into your web browser. This should return a wall of cryptic text similar to the following image.
Like most modern APIs, Inbound.org returns a JSON object which is a way to format data to easily interchange it. You can learn more about JSON from the official site: http://json.org.
To make the data quickly usable, we need to convert the JSON to CSV format. The easiest way I found to do that online is with this tool: http://konklone.io/json/. Select all (ctrl-a) the JSON-formatted data that was returned by your API call, then copy (ctrl-c) and paste (ctrl-v) it into the upper box in the tool. You will see a table generated in the bottom box. Click the link above the table to "download the entire CSV".
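If you would rather skip the online tool, the same conversion takes a few lines of Python. The field names below are hypothetical stand-ins, since the exact fields depend on the API's response:

```python
import csv
import json

# Hypothetical sample of the JSON returned by the API; the real response
# contains more fields, but the conversion works the same way.
raw = '''[
  {"title": "Post A", "upvotes": 42, "comments_count": 7},
  {"title": "Post B", "upvotes": 10, "comments_count": 1}
]'''

articles = json.loads(raw)

# Write one CSV row per article, with a header row taken from the keys.
with open("articles.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=articles[0].keys())
    writer.writeheader()
    writer.writerows(articles)
```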
Now that we have a CSV file, we can open it and save it as an Excel sheet, or upload it to Google Drive, where it can be converted to a Google spreadsheet. Unless you have vast amounts of data, Google Sheets should be enough. Either way, you now have content marketing related articles with interesting information about each one, such as the URL, title, description, upvotes, comment count, clicks, views, name of the submitter, and date and time.
Now it’s up to you to get creative with the data and ask the questions you would like to have answered. Perhaps you would like to find the relation between upvotes and comments on a post (e.g. how many upvotes it takes to get a single comment), or the best times to post about a certain topic, or to check what attributes engaging content pieces have in common.
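As a tiny illustration of the first question, here is a sketch that computes the upvote-to-comment ratio from a few made-up rows. The column names are assumptions, so match them to the headers in your actual CSV:

```python
import csv
import io

# Made-up sample standing in for the downloaded CSV file.
sample = io.StringIO(
    "title,upvotes,comments_count\n"
    "Post A,42,7\n"
    "Post B,30,5\n"
    "Post C,18,0\n"
)

total_upvotes = 0
total_comments = 0
for row in csv.DictReader(sample):
    total_upvotes += int(row["upvotes"])
    total_comments += int(row["comments_count"])

# Average number of upvotes per comment across the sample.
print(total_upvotes / total_comments)  # 90 / 12 = 7.5
```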
If possible, I recommend visualizing your data in charts to make it easier to analyze and notice trends. I find bar charts capable of representing most of the insightful data for content marketing purposes. By now, you should be on your way to gaining interesting insights and improving your content marketing efforts.
Example: engagement per post by day of week and time (UTC +2).
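A chart like the one above can be produced by bucketing posts by the weekday of their timestamp. Here is a minimal sketch with made-up timestamps and engagement numbers; the date format is an assumption, so adjust it to whatever your CSV actually contains:

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical (timestamp, engagement) pairs, where engagement could be
# upvotes + comments for each post.
posts = [
    ("2015-06-01 09:30", 12),  # a Monday
    ("2015-06-01 14:00", 20),
    ("2015-06-03 11:15", 9),   # a Wednesday
]

totals = defaultdict(list)
for stamp, engagement in posts:
    day = datetime.strptime(stamp, "%Y-%m-%d %H:%M").strftime("%A")
    totals[day].append(engagement)

# Average engagement per post for each weekday.
for day, values in totals.items():
    print(day, sum(values) / len(values))
```

From here, the per-weekday averages can be pasted into a spreadsheet or fed to any charting tool to draw the bar chart.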
Now that you understand the basics of how APIs work, you can do the same with any API that doesn’t require authentication. This can be useful beyond content: it can help you define and find targeted audiences, generate leads and automate processes, among many other marketing goals. I hope you find this guide to APIs useful, and stay tuned for more data-driven marketing articles.