In my previous article, I covered creating geospatial SQL Server tables using the freely available Natural Earth resources. Natural Earth is an extensive public domain map dataset available at 1:10m, 1:50m, and 1:110m scales in vector and raster formats, and it can serve as an alternative to ESRI Shapefiles for geospatial data. In this article, we’ll create a simple SQL Server Reporting Services report which uses this spatial data in lieu of the more commonly used shapefiles to plot data on a custom map.
To get started, we’ll first need to create some assets. Follow the steps in the previous article to set up the Natural Earth tables. Next, we’ll create some views to simplify queries, and then we’ll build a report using these assets along with some sample data.
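As a rough idea of what one of these simplifying views might look like, here is a minimal sketch. The table and column names (`dbo.ne_10m_admin_1_states_provinces`, `name`, `admin`, `geom`) are assumptions standing in for whatever your Natural Earth import produced; adjust them to match your actual schema.

```sql
-- Sketch of a view to simplify report queries against the imported
-- Natural Earth data. Table and column names are assumptions, not
-- the exact names from the import.
CREATE VIEW dbo.vw_NorthAmericaStatesProvinces
AS
SELECT [name]  AS StateProvinceName,
       [admin] AS CountryName,
       [geom]  AS SpatialData
FROM dbo.ne_10m_admin_1_states_provinces
WHERE [admin] IN ('United States of America', 'Canada', 'Mexico');
```

A view like this keeps the report’s dataset query short and hides the raw import table from report authors.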
SQL Server Reporting Services (SSRS) has excellent geospatial support for displaying data on a map. Maps are typically created using ESRI Shapefiles (.shp files), which are usually produced with complex GIS software and made available for download (sometimes free, sometimes not). Additionally, SSRS includes an excellent default set of Shapefiles for the US which can show the country, states, and individual counties.
Example of SSRS Shape File showing Georgia and its 159 counties.
But what about when you need more flexibility in your geographic display? Some examples of this may be wanting to display something that you can’t find a shape file for (maybe all the states and provinces in North America) or maybe you want to dynamically draw the geography based on some property of the dataset. Geospatial data queries to the rescue! Using SQL Server’s native geospatial support, a geospatial query can be created to return something as simple as a point or rectangle, or complex as the geography of an entire country and all of its rivers.
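To see how simple the simple end of that spectrum is, here’s an illustrative geospatial query that needs no tables at all; the coordinates are just placeholders chosen for illustration (roughly Atlanta and a box around north Georgia):

```sql
-- A trivial geospatial query using SQL Server's native geography type.
-- Coordinates are illustrative placeholders; SRID 4326 is WGS 84.
DECLARE @point geography = geography::Point(33.749, -84.388, 4326); -- lat, long

-- Exterior ring listed counterclockwise (long lat pairs), as the
-- geography type expects.
DECLARE @box geography = geography::STGeomFromText(
    'POLYGON((-85.6 30.3, -80.8 30.3, -80.8 35.0, -85.6 35.0, -85.6 30.3))', 4326);

SELECT @point AS SinglePoint, @box AS BoundingBox;
```

Running this in SSMS shows the shapes on the Spatial results tab, and the same kind of query can feed an SSRS map layer directly.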
Getting all of the latitude and longitude coordinates to create a useful geospatial query could potentially be an enormous amount of work. Fortunately, that work has already been done in a freely available resource, thanks to Natural Earth and Laurent Dupuis. SQL Server 2012 or greater is recommended for this process.
Example of a geospatial query, shown in the SSMS results pane, based on the imported Natural Earth data.
This past September, I attempted (and recently received a passing score for) Microsoft’s new certification exam, 70-768, “Developing SQL Data Models” during its Beta period. This brand new exam is a requirement toward the new MCSA SQL 2016: Business Intelligence Development certification. Last weekend, I took and passed 70-466, “Implementing Data Models and Reports with Microsoft SQL Server” which counts toward MCSE: Data Management and Analytics. Both of these exams overlap heavily in the topics covered, so if you’re interested in taking them both, it’s a good idea to study for both and take them back to back.
Brand new exams, as well as higher-level specialty exams in general, can be a bit of a tricky beast due to the lack of available resources. For exams which are mainstream (such as Windows Server exams) and have been out for a while, you can count on resources such as MS Press books targeted toward the specific exam, practice tests from MeasureUp and Transcender, and other useful resources. Unfortunately, for brand new exams, or many of the high-level SQL Server exams, none of this exists. Having successfully studied for and passed both 70-768 and 70-466, I’ve gathered below some tips and resources I used to prepare for each exam.
General Tips for Both Exams
The first tip is simply to know the question types that the exams typically cover. Microsoft lists all of their question types here, with examples of each. It’s a good idea to pay special attention to the “Build List” question type, which emphasizes knowing the steps, and the order, in which task components should be performed. There seems to be a lot of love for this question type.
For both exams, the best starting place is the bulleted “Skills Measured” section of the official exam page. Modern Microsoft exams follow this section very closely; you can practically guarantee there will be a question that ties back to each sub-item in each category. I like to go through all of the bullet points and break them down into short words or phrases that I can then use as a checklist while studying. I’ve included an example for each exam below in the exam-specific sections.
Saturday, December 10th marked the 2nd annual SQL Saturday Atlanta BI Edition. Atlanta is known for its massive SQL Saturday held every spring/summer, so I’m happy to see the smaller, more BI-focused winter event continuing on. With such a large number of SQL Server professionals in the area, there is definitely room for multiple events.
As with last year’s event, this one was well run, with no flaws that I was aware of. This year seemed to have a bit of a “back to basics” theme. Many of the extras frequently seen at SQL Saturday events (lots of sponsors, attendee bags and printed materials, speaker shirts, paper session evaluations, and so on) weren’t present. Instead, the focus was purely on providing a full day of content across multiple tracks, and you know what, that’s just fine. Free donuts (many of them) were provided for breakfast, boxed lunches were purchased, and everything was adequate. The core idea behind SQL Saturday is free training and networking, and the event delivered! I thought the session lineup for this event was a particularly great mix of topics.
Recently, I had the opportunity to participate in the beta period for the new Microsoft 70-768, “Developing SQL Data Models” exam. As part of the development process for new exams, Microsoft periodically offers invitations to take a beta version of the exam free of charge. Passing an exam during the beta period counts as officially passing it when it is released. However, there are a few catches. The first is that these betas are designed to be taken by people who already have knowledge of the subject and work with it on a regular basis. Since the exam is brand new, there are no official study materials, and there is typically a short window between the announcement and the close of the beta period, so there isn’t a lot of time to prepare. These exams also aren’t for those who need instant gratification, as it may take a month or longer to receive score results and ultimately find out whether you passed or failed. But the wait is for a good cause: the beta exams are used to determine the final version of the exam as well as the passing score, which must be completed before the beta exams can then go through the scoring process to be officially scored.
For those interested in taking a beta exam and qualified to do so, I’ve found that Microsoft usually announces the availability of a limited number of seats via the Born to Learn blog which are available until the allocated number of exams are scheduled. For this last batch of SQL Server beta exams, 300 seats were made available for each exam. This may sound like a lot, but this is worldwide, so you have to act fast. I’ve found that if you don’t act within the first day or maybe two of the announcement, you will probably be too late. Once announced, you typically have to have the exam scheduled in a proctored environment within around a month of the announcement.
Like all Microsoft certification exams, the beta exams are covered by an NDA protecting the exam content, so I won’t go into exam specifics, but I do have a few general thoughts about the exam and process. It is interesting to see how the Microsoft exams in general have changed and matured over the years.
- I’d classify this exam as “tough, but fair” with a heavy amount of reading. I underestimated the amount of reading, and was thoroughly exhausted at the conclusion of the exam.
- Due to the length of the questions and answers, and re-reading questions and answers, I used almost all of my time allocated for the exam.
- The published Exam Objectives were very representative of the exam content (shockingly so) and make a very good framework for preparing for the exam.
- How an exam is scored is always a black box, but some questions do make mention of how they award partial credit, such as “1 point per part of correct answer” which I found to be a welcome change — nice to know for sure that partial credit is at least possible in some cases.
- The test engine itself is new since the last time I took an exam and seemed to work well, with one exception. An exam may be made up of a combination of “reviewable” questions, which you can go back to, and “non-reviewable” questions, which you cannot. I’m sure there’s an intelligent reason for this. The way the engine handles this is to go through the reviewable questions first, then allow for the standard review-and-change process, then proceed to the non-reviewable questions with the remaining time; it just didn’t do a good job of explaining this. Instead, I saw the question count, such as “50 out of 60,” hit Next, and it proceeded to the review screen, which I assumed was a glitch in the question count, no big deal. So I spent a bit too much time reviewing, only to have it launch into the next set of questions with not much time remaining once I finished review. It would work a bit better if the non-reviewable questions came first, or if there were a can’t-miss explanation screen to allow for better budgeting of time.
- The question types are made available here. Not all types will necessarily be present in every exam, and the Build List is definitely a type to be very familiar with, as it seems quite popular.
I’m currently eagerly awaiting my results, and probably will be still for some time as the beta period has just closed. Should I not pass, I look forward to attempting it again, possibly in the new online proctored format, which I’m curious to try out.
Most introductions start with an explanation of cubes and MDX and then work their way toward the syntax and structure of a query and then start to cover the various syntax options in increasing difficulty. While that is a great approach, sometimes you just need to get going right away. Fortunately, MDX actually has a few good visual methods of querying an existing cube to access data without necessarily needing to know the syntax and theory (however, that doesn’t hurt!). This post is going to cover a few of the visual methods of querying a multidimensional analysis services cube.
All that being said, it does still help to understand the structure of your data and the various relationships before trying to query it. I’m using the Adventure Works 2014 Data Warehouse and Analysis Services data sets (previously discussed here). The Adventure Works 2014 Data Warehouse contains a number of different fact tables and related dimensions. I primarily use FactInternetSales (and its related dimensions) which is data for the Internet Sales of Adventure Works. The diagram for the relational data warehouse database can be seen below.
Adventure Works 2014 Data Warehouse Internet Sales diagram
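Even when relying on the visual query designers, it helps to recognize the MDX they generate. Here is a minimal hand-written sketch; the cube, measure, and hierarchy names assume the standard Adventure Works multidimensional sample and may differ in your deployment:

```mdx
-- A basic MDX query: Internet Sales Amount broken down by calendar year.
-- Cube, measure, and hierarchy names are assumptions based on the
-- standard Adventure Works sample.
SELECT
    [Measures].[Internet Sales Amount] ON COLUMNS,
    [Date].[Calendar Year].MEMBERS ON ROWS
FROM [Adventure Works];
```

The visual designers covered below ultimately emit queries of this same shape, just with more members, sets, and parameters.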
This post serves as a full reference of all of the resources I used while on my journey to learn MDX / SSAS Multidimensional. I’ll try to keep this post updated with an index of resources as I consume them. Some will be free, some will not, I’ll note which are which.
Recently I had the need to start learning MDX to query against an existing Microsoft SQL Server Analysis Services (SSAS) cube. As this can be a long and difficult journey, I thought it’d be useful to make notes about specific things I learn as well as to list and review resources I find along the way. I will be paying special attention to using MDX with SQL Server Reporting Services (SSRS) as this is a very common usage scenario, but one in which there are sparse learning resources.
But first, some housekeeping:
SQL Saturday Roundup: #477 – Atlanta, GA (BI Edition)
This past Saturday marked a great landmark for Atlanta — the first “BI Edition” SQL Saturday, and, for many of us, our first SQL Saturday of 2016! Atlanta has been hosting a regular SQL Saturday event for many years now, always with tremendous attendance. Based on the amount of interest in the regular SQL Saturday (usually around May of each year), it was great to see a BI focused edition launched.
With the regular event in the spring, holding this event in January spaces out the two Atlanta events about as evenly as possible. And, for a first event, it seemed to be a tremendous success, with a registration wait list and around 300 in attendance.
The event was held at the local Microsoft facility in Alpharetta, GA, where the monthly Atlanta MDF user group meetings are held. All told, the facility was a pretty good choice of venue (and definitely a convenient location), but it suffered from some overcrowding. The facility had about half the sessions in roughly classroom-sized rooms and half in much smaller conference-room-sized rooms. While these smaller rooms made for an interesting and more intimate setting, they ultimately filled up very quickly.
Unlike many SQL Saturdays, the event kicked off with an opening keynote and presentation in the large room (technically multiple rooms joined together). I enjoy it when a SQL Saturday begins with some sort of all-attendee opening remarks; it provides nice symmetry to the event (which always has a final closing-remarks session), so hopefully more events will adopt this.
Dandy Weyn presenting the opening remarks at SQL Saturday Atlanta 2016, BI Edition.
Recently, I had the need to analyze phone call data to answer questions such as how many phone calls were received in a given day and how frequently voicemail answered instead of a live person. In this scenario, I was fortunate enough to have a fairly accessible phone system to work with — a Cisco UC520. While this guide is specific to working with a Cisco UC520 device, most Cisco phone systems (the UC series or anything utilizing CUE/CME) should be pretty comparable.
So, you want to be able to analyze Cisco call data? Well, you’re in luck! There are three major steps to this process:
- Extract raw call data from the phone system
- Capture the exported data and interpret it
- Insert it into SQL Server and perform reporting
This won’t be a complete step-by-step guide, but I’ll try to hit all the high points and am always open to questions.
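To make the third step concrete, here is a sketch of what the SQL Server side might look like once the call data has been captured. The table name, column names, and types are assumptions for illustration, not the UC520’s actual CDR schema:

```sql
-- Sketch of a table to hold imported call detail records (CDRs).
-- Column names and types are illustrative assumptions.
CREATE TABLE dbo.CallDetailRecords (
    CallId        INT IDENTITY(1,1) PRIMARY KEY,
    CallStart     DATETIME2    NOT NULL,
    DurationSecs  INT          NOT NULL,
    CallingNumber VARCHAR(32)  NULL,
    CalledNumber  VARCHAR(32)  NULL,
    AnsweredBy    VARCHAR(16)  NULL  -- e.g. 'Person' or 'Voicemail'
);

-- Example report query: calls per day and how often voicemail answered.
SELECT CAST(CallStart AS date) AS CallDate,
       COUNT(*) AS TotalCalls,
       SUM(CASE WHEN AnsweredBy = 'Voicemail' THEN 1 ELSE 0 END) AS VoicemailCalls
FROM dbo.CallDetailRecords
GROUP BY CAST(CallStart AS date)
ORDER BY CallDate;
```

With the data in a shape like this, the questions from the opening paragraph (calls per day, voicemail versus live answers) become simple aggregate queries.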