5 Minutes with Conor Halloran

Spark’s resident whiskey conor-isseur

Our Development team works ardently behind the scenes, consistently building new features and improving our platform so your teams can market and sell your new development projects. So, we figured it was about time we introduced you to the kind of development we do, and the people who build Spark.

As the software that powers sales for some of the largest development projects in the world, it may be surprising that our Development team (Devs) is quite lean. Led by CTO & Co-founder Ryan Ilg, our Devs are a tight-knit cohort of Developers, Quality Assurance (QA) Managers, and Project Managers (PMs) who work closely with our Product and Design team to produce the suite of tools your team depends on.

Spark’s foundation is built with the Ruby programming language on the Ruby on Rails framework, which is primarily used for creating high-performance web platforms and takes considerable time and effort to master. Spark’s Dev team is composed of top-tier engineers who are passionate about building innovative solutions – no one more so than Conor Halloran, an Intermediate Developer who has been with Spark for over five years.


We met with Conor to discuss the development of some of Spark’s newest and most anticipated user-focused features: Parking and API v2. We also discussed the journey from customer request to finished product, and becoming a new father to one of the cutest babies around. Oh, and of course what kind of whiskey he’s sipping on these days.


While real estate has traditionally been tech-averse, designing user-focused features to bring RE into the technological age is at the heart of our mission. Can you shed some light on the strength of our Dev team and how you and the team are working toward this goal?

Our culture drives this. The team is composed of some of the best people I know – people I genuinely feel honoured to work with. It’s been more than five years now and not one day has passed where I haven’t looked forward to advancing Spark with them, and I believe that sentiment is shared across our departments. We have a great deal of cohesion that is key to our daily workflow and feature planning.

There isn’t any red tape for the Customer Success (CS) team to get through when they want to connect. If a bug or question comes up from a client and CS needs a Dev’s input, they can message any one of us directly. This expedites bug squashing and getting solutions to clients who experience unexpected behaviour.

It’s a two-way street as well. If I’m working on a task to address an issue that CS reported, a quick message to those involved gets me the answers I need.

Our Product Management teams are also top notch. I really appreciate how Spark has gone about growing that team. They’ve leveraged internal growth, promoting amazing talent who have experienced the product as users, worked with clients through CS, and are now ready to help manage the growth of Spark.

Then there is the Quality Assurance (QA) team. My goodness they're great. They were absolutely integral to the development and release of APIv2. I don't know how they are able to do what they do. They have a very professional and respectful approach should any issues be found.

All this culminates in a highly efficient workflow with direct communication and informed decision making, all built on a foundation of mutual respect and admiration.

That’s huge for life as a Dev. There can be cases where tasks or projects hit unexpected issues or scope inflation that results in delays. Instead of being hammered to hit deadlines that are no longer realistic, there is a support system of CS, PM, QA and Product that understands Spark and is keen to assist and ride over any speed bumps along the way.

You’ve been here for 5 years. What are some of the ways you’ve seen the company or product evolve during your tenure?

The company has seen a lot of change and, at the same time, very little. Aside from the obvious growth in personnel, our processes have been refined. We were a well-oiled machine in the early days, with solid cohesion between the departments – we had to be, since we were a small team sharing one office. We’re much bigger now, but I’d say we’re moving along at an even better pace.

That is due in large part to the culture I mentioned above. We’ve integrated tools to enhance our productivity, but the culture of Spark hasn’t changed. We’re all pushing to make Spark the best real estate development platform out there.

Parking is one of our largest and most anticipated feature releases to date. How much work goes into launching a feature like this?

When we first started working on this feature, it wasn’t supposed to be much – certainly not a feature candidate. We had a parking_stall_count integer and a parking_stall_numbers string attribute on the inventory that we were simply going to convert into a model. This would enable assigning parking stall names in an optimized fashion rather than storing them in a single string.
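As a rough illustration of that conversion (a plain-Ruby sketch, not Spark’s actual code – only the two attribute names come from the quote above, everything else is hypothetical), splitting the old comma-separated string into individual stall records might look like:

```ruby
# Hypothetical sketch: convert the old inventory-level attributes --
# parking_stall_count (integer) and parking_stall_numbers (a
# comma-separated string) -- into individual stall records that can
# each carry their own data (price, type, upgrades, etc.).
ParkingStall = Struct.new(:name, :inventory_id, keyword_init: true)

def migrate_stalls(inventory_id:, parking_stall_numbers:)
  parking_stall_numbers.split(",").map(&:strip).map do |name|
    ParkingStall.new(name: name, inventory_id: inventory_id)
  end
end

stalls = migrate_stalls(inventory_id: 42, parking_stall_numbers: "P101, P102, P103")
stalls.map(&:name) # => ["P101", "P102", "P103"]
```

Once each stall is its own record rather than a fragment of a string, it can gain attributes and associations independently – which is exactly what opened the door to the feature growth described next.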

Once that was established, the floodgates of possibilities opened: adding price to Parking… What about different types and upgrades? Oh, how about Contract integration and the ability to add or remove stalls from the Inventory via the Contract? Things escalated quickly. At the same time, Spark was growing: Conveyancing was introduced along with monetization of price attributes, and the Importer was created along with enhancements to our Exporter. The Parking feature needed to grow and integrate with all of this.

We take great pride in how Spark seamlessly integrates the complexities of new development business practices into one platform.

This feature needed to be fully integrated before we released it. That commitment to quality and user experience meant the feature expanded significantly in scope and took longer than planned, but the end product and user experience are all the better for it.

Is there anything about our new feature that people may find surprising?

We built this feature to help users manage their parking stalls throughout the sales process. At any point, they’ll know what can and can’t be allocated, preventing over- or under-selling. That shouldn’t be a surprise given the name of the feature, so I’d say it’s gotta be how dynamic it is to the user’s workflow.

If you have all the parking stalls in the development planned out ahead of time, right down to the stall numbers and inventory allocation, you can mass create the stalls: enter the quantity and the starting number (along with a prefix/suffix) and it will create those stalls and assign them to units. Done.
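That mass-create step could be sketched like this (plain Ruby; the method name and parameters are illustrative assumptions, not Spark’s actual API):

```ruby
# Hypothetical sketch: generate a batch of stall names from a starting
# number plus an optional prefix/suffix, as described above.
def generate_stall_names(count:, start: 1, prefix: "", suffix: "")
  (start...(start + count)).map { |n| "#{prefix}#{n}#{suffix}" }
end

generate_stall_names(count: 3, start: 101, prefix: "P")
# => ["P101", "P102", "P103"]
```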

Any Contracts generated on that unit going forward will automatically have those specific stalls assigned to them as well. Conversely, if the parking stall allocation isn’t set in stone, the feature can flex for that: stall names/numbers are not required, and you can just create a batch of blank stalls to allocate to inventory as you go. And if your development is already mid- or late-cycle, the feature can be enabled and integrated to help fine-tune the allocation of the remaining stalls.

It’s super cool how flexible and dynamic we built the Parking Stall Management feature.

How does parking integrate with the rest of Spark?

It has been integrated into everything, from Inventory and Contracts to Reports, Exports and Imports. When utilizing this feature, you’ll gain so much more feedback and control during the sales process.

Parking is very much dependent on the associated unit, but it also has its own ecosystem of types and upgrades that can be managed. Administrators can set limits on how many of each type or upgrade can be sold on the project. If these types/upgrades are allocated ahead of time, no more than the limit can be sold. If they aren’t pre-allocated, the limit tracker applies at the Contract level in real time: if a new Contract takes the last EV stall, no future Contract will be able to add that upgrade.
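The real-time limit check could look something like this (a minimal plain-Ruby sketch; the class and method names are assumptions, not Spark’s implementation):

```ruby
# Hypothetical sketch: a project-wide limit on an upgrade (e.g. EV
# charging). Once the limit is reached, further Contract allocations
# are refused in real time.
class UpgradeLimitTracker
  def initialize(limit:)
    @limit = limit
    @sold = 0
  end

  def remaining
    @limit - @sold
  end

  # Returns true if the upgrade could be allocated to the contract.
  def allocate
    return false if remaining <= 0
    @sold += 1
    true
  end
end

ev = UpgradeLimitTracker.new(limit: 1)
ev.allocate # => true  (the last EV stall is taken)
ev.allocate # => false (future Contracts can't add the upgrade)
```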

Your son Michael was born in early 2021, after our team had fully transitioned to remote work. How have both of these changes influenced your understanding of routine and productivity, and more importantly, how did your pup Khaleesi feel about it?

That was an exhausting period. Working from home with a baby suffering from colic was challenging. I just did what I always try to do when faced with difficult situations: be positive and thankful.

It would have been easy to fall off the rails: your child is screaming from digestive pain and there isn’t anything you can do about it, your wife is recovering from childbirth while also being unable to help her child... and meanwhile you have work and deadlines to hit. The doctors told us the colic would eventually go away between 3 and 6 months and we just needed to ride it out (it took 8). I reminded myself how lucky we were to have an otherwise healthy baby, and that while working from home in this situation was tough, I was thankful to have the opportunity to be there to support my wife.

Time management and self-awareness of when I was most productive became integral to my daily workflow.

I recognized that mornings are my most productive time, so I would try to focus and hammer out as much as possible then. Having a supportive management team at Spark was huge as well – not just supporting my work during this time, but also our CPO Cody and his wife Chin Chin giving us baby advice and hand-me-downs, which were huge quality-of-life improvements.

Khaleesi... She is a rock star. I’ve had her since she was a pup and knew I needed to train her to tolerate ear/tail pulling, eye/nose poking, etc. When Michael came around, it was sweet to see a maternal side of Khaleesi kick in – so patient and gentle. It helped that Michael was also a source of instant treats and rewards if she just did nothing around him. Michael now goes up to her for "huggies" and loves being a part of her meal routine, carrying the kibble and giving her the release command: "okay, go!" She’s been amazing. I couldn’t dream of a better dog.

API 2.0: What is our role in continuing to update the product as the market adjusts to, and adopts, new technologies, and what are the benefits of a more robust API?

We go to great lengths at Spark to give our users access to their data via any avenue they choose: reports to provide actionable insights, exports to plug into any spreadsheet system, or the API to connect with any third-party software.

The data in Spark is yours to use.

APIv2 is an exclamation point on that commitment. We’ve built a tool that provides greater access to more data, in a more performant manner, all while being easier to develop for and with.

Can you describe the process of launching API 2.0?

This was an exciting opportunity to build an API that I’d want to work with.

The challenge was to make it accessible, maintainable and performant. We developed the documentation on the Postman API platform, which connects with our API and provides example queries with returned data to aid development. The vast amount of data we’re now surfacing (and continuously expanding) has really made Spark’s API an open book.

One of the challenges we had with APIv1 was keeping it in step with our pace of development on the app; it was tedious to update the code and then update the documentation. We needed APIv2 to be easy to work with and easy to keep in sync with Spark. That will have huge benefits for our end users: when new features or datasets are added to Spark, it’ll be easy for the Dev team to add them to the API.

What benefits do you expect the new API to deliver to our customers?

I’m super excited to see what our clients and their teams can accomplish with our API. The performance enhancements alone will be a big boost to the operations they can execute and usability/flow of their UI.

I haven’t touched on the resource permissions that we added! That will be a real quality-of-life improvement, as you can enable or restrict access to data points per ApiKey. Before, it was all or nothing – you really needed to have confidence in whomever you were sharing the key with.

While you should still exercise a level of trust there, you can now specify that an ApiKey should only have access to Inventory data while keeping Contact data out of reach. Better performance, permission-based access and expanded resource/association data (combined with more filtering tools) – honestly, the possibilities are endless.
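The idea of a scoped key can be sketched in plain Ruby (the class and resource names here are assumptions for illustration, not the actual APIv2 implementation):

```ruby
# Hypothetical sketch: an API key granted access only to specific
# resources, so e.g. Inventory data is readable while Contact data
# stays out of reach.
class ScopedApiKey
  def initialize(allowed_resources:)
    @allowed = allowed_resources.map(&:to_sym)
  end

  def can_access?(resource)
    @allowed.include?(resource.to_sym)
  end
end

key = ScopedApiKey.new(allowed_resources: [:inventory, :parking_stalls])
key.can_access?(:inventory) # => true
key.can_access?(:contacts)  # => false
```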

Finally, let’s finish off with a fun one. As our resident whiskey connoisseur, what are you drinking right now, or what are your 3 all-time favourites?

Can’t do just three... how about a list of regions and my favourite from each that are annual releases (a.k.a. you can still buy them)?

  1. Islay Whisky: Ardbeg – Uigeadail, or Laphroaig 10 Cask Strength.
  2. Speyside Whisky: GlenAllachie 10 Cask Strength.
  3. Irish Whiskey: Powers – John’s Lane Edition (had it in a flask during my wedding), or Redbreast 12 Cask Strength.
  4. Japanese Whisky: Nikka – From the Barrel.
  5. Canadian Whisky: Macaloney’s Island Distillery – Kildara.
  6. American Whisky: Balcones – Brimstone.