While Artsy is the largest database of Contemporary Art online, it’s not exactly “big data”. To date, we have published over 500,000 artworks by more than 50,000 artists, from over 4,000 galleries and 700 museums and institutions, across over 40,000 shows. Our team has written thousands of articles and hosted hundreds of art fairs and a few dozen auctions. We have over 1,000 genes from the Art Genome Project, too.

There are just over a million web pages generated from this data on artsy.net. Generating sitemaps to submit to Google and other search engines for a million pages never seemed like a big deal. In this post I’ll describe three generations of code, including our most recent iteration that uses Apache Spark to generate static sitemap files in S3.
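The post walks through the full Spark pipeline; as a rough sketch of the shape of the problem (not the Spark code itself, just a hedged TypeScript illustration), the sitemap protocol caps each file at 50,000 URLs, so a million pages already means splitting the output across many files plus a sitemap index:

```typescript
// Minimal sketch: chunk a list of page URLs into sitemap files of at most
// 50,000 entries each (the limit from sitemaps.org), as plain XML strings.
// This is an illustration, not Artsy's actual sitemap layout.
const MAX_URLS_PER_SITEMAP = 50000

const toSitemapXml = (urls: string[]): string =>
  `<?xml version="1.0" encoding="UTF-8"?>\n` +
  `<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n` +
  urls.map(url => `  <url><loc>${url}</loc></url>`).join("\n") +
  `\n</urlset>`

const toSitemapFiles = (urls: string[]): string[] => {
  const files: string[] = []
  for (let i = 0; i < urls.length; i += MAX_URLS_PER_SITEMAP) {
    files.push(toSitemapXml(urls.slice(i, i + MAX_URLS_PER_SITEMAP)))
  }
  return files
}

// A sitemap index file then points search engines at each generated file,
// e.g. the objects uploaded to an S3 bucket.
const toSitemapIndexXml = (fileUrls: string[]): string =>
  `<?xml version="1.0" encoding="UTF-8"?>\n` +
  `<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n` +
  fileUrls.map(u => `  <sitemap><loc>${u}</loc></sitemap>`).join("\n") +
  `\n</sitemapindex>`
```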

Read on →

Hey there, so you’ve decided to take a look at React Native? Well, last week I ran a workshop inside Artsy on React Native and Relay.

The video takes you from react-native init to having the initial structure of a Relay-based View Controller with a real, working API request. The video is about 45 minutes long, with inline questions.

If you wanted to just run through the notes, you could probably get it working in about 10 minutes.

Jump to YouTube for the video, or click more for a smaller inline preview, as well as all of the speaker’s notes to copy & paste from. There is also a full copy of the end result at orta/Relay-Artist-Example.
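To give a flavour of where the workshop ends up, here is a rough sketch of a Relay (classic) container around a React Native component; it is not code from the workshop or from orta/Relay-Artist-Example, and the fragment fields are assumptions:

```tsx
// Hedged sketch of a classic-Relay container in React Native.
// The Artist fields queried here are assumptions, not the workshop's exact code.
import * as React from "react"
import { Text, View } from "react-native"
import * as Relay from "react-relay"

interface Props {
  artist: {
    name: string
    birthday: string | null
  }
}

class Artist extends React.Component<Props> {
  render() {
    const { artist } = this.props
    return (
      <View>
        <Text>{artist.name}</Text>
        <Text>{artist.birthday}</Text>
      </View>
    )
  }
}

// createContainer declares the data the component needs; Relay fetches it
// from the GraphQL API and passes it in as props.
export default Relay.createContainer(Artist, {
  fragments: {
    artist: () => Relay.QL`
      fragment on Artist {
        name
        birthday
      }
    `,
  },
})
```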

Read on →

Swift became public in June 2014, and by August we had started using it at Artsy. By October, we had Swift in production channelling hundreds of thousands of dollars in auction bids.

It is pretty obvious that Swift is the future of native development on Apple platforms. It was a no-brainer, then, to build an Apple TV app in Swift, integrate Swift support into our key app Eigen, and build non-trivial parts of that application in Swift.

We first started experimenting with React Native in February 2016, and by August 2016 we announced that Artsy had moved to React Native, effectively meaning that new code would be written in JavaScript from then onwards.

We’re regularly asked why we moved. It was touched on briefly in our announcement, but I’d like to dig into this and try to cover a lot of our decision process. So, if you’re into understanding why a small team of iOS developers with decades of native experience switched to JavaScript, read on.

This post will cover: what Artsy’s apps are, Swift’s positives and negatives for us, React Native, and our one-year summary.

Read on →

The Artsy web team were early adopters of Node, and for the last 4 years the stable stack for the Artsy website has predominantly been Node + CoffeeScript + Express + Backbone. In 2016 the mobile team announced that it had moved to React Native, matching the web team in using JavaScript as the tool of their trade.

Historically we have always had two separate dev teams for building Artsy.net and the corresponding iOS app; we call them (Art) Collector Web and Collector Mobile. By the end of 2016 we decided to merge the teams. The merger has given rise to a whole plethora of ideas about what contemporary JavaScript looks like, and we’ve been experimenting with finding common, natural patterns between web and native.

This post tries to encapsulate what we consider to be our consolidated stack for web/native Artsy in 2017.

TLDR: TypeScript, GraphQL, React/React Native, Relay, Yarn, Jest, and Visual Studio Code.
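As a tiny illustration of how a few of those pieces fit together (the component and test below are invented for this example, not code from our apps), here is a typed React component with a Jest snapshot test:

```tsx
// Hypothetical example: a typed React component plus a Jest snapshot test,
// showing TypeScript + React + Jest from the stack above working together.
import * as React from "react"
import * as renderer from "react-test-renderer"

interface ArtistLinkProps {
  name: string
  href: string
}

const ArtistLink = ({ name, href }: ArtistLinkProps) => <a href={href}>{name}</a>

it("renders an artist link", () => {
  const tree = renderer
    .create(<ArtistLink name="Norman Rockwell" href="/artist/norman-rockwell" />)
    .toJSON()
  expect(tree).toMatchSnapshot()
})
```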

Read on →

Artsy’s end of year features are an annual chance to walk through highlights of the year while also exploring front-end experiments. Created in collaboration with UBS and designed by Owen Dodd, The Year In Art 2016 presents an interactive timeline of singular moments in art and culture over the past year.

2017 Year In Art Animation Sample

The piece opens with a header animation, a series of transparent sliding boxes that presented a unique challenge. The finalized look is somewhat like a Slinky: a stack of containers that stretch open from the bottom and compress again as they reach the top of the viewport, collapsing inward without ever crossing outside the screen.

Achieving this effect required animating elements in response both to the size of other elements in the viewport and to the user’s scroll interactions, all while sitting transparently over a video background.
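As a rough sketch of the general idea (not the production code from the piece; the class name and the scaling rule are invented), a scroll handler can measure each box against the viewport and compress it as it approaches the top:

```typescript
// Hedged sketch: squash header boxes as they approach the top of the viewport.
// ".header-box" and the linear scaling rule are assumptions for illustration.
const boxes = Array.from(document.querySelectorAll<HTMLElement>(".header-box"))

const update = () => {
  const viewportHeight = window.innerHeight
  for (const box of boxes) {
    const rect = box.getBoundingClientRect()
    // 1 near the bottom of the viewport, approaching 0 at the top.
    const progress = Math.min(Math.max(rect.top / viewportHeight, 0), 1)
    // Compress the box vertically as it nears the top, so the stack
    // collapses inward instead of sliding off screen.
    box.style.transform = `scaleY(${0.2 + 0.8 * progress})`
    box.style.transformOrigin = "top"
  }
}

// Throttle the work to one layout read/write per animation frame.
let ticking = false
window.addEventListener("scroll", () => {
  if (!ticking) {
    ticking = true
    requestAnimationFrame(() => {
      update()
      ticking = false
    })
  }
})
```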

Read on →

We have a lot of really awesome data, things worth exploring and visualizing. We have an entire team devoted to it; it looks like they’re hiring, too. Not all of the output from that data comes from that team, though: two years ago our Director of Product Engineering, Craig Spaeth, created a static-site generator that mapped our partners around the globe. Last week I spent some time improving it.

An animated map of galleries

Projects like these happen at most companies: quick one-off hacks that get opened up two years later by someone completely different who has to build on top of them. In trying to follow the Boy Scout rule, I’ve cleaned it up and consolidated some other similar projects. This post is a rough road map of what making this PR looked like.

Read on →

New year, new deploy process! Late last year our mobile team completed the update to Swift 3 (and thus, the update to Xcode 8). The latest version of Apple’s IDE includes a lovely feature: automatic provisioning profile management! (Note: not sarcasm, the feature is really nice. Check out the WWDC video for an in-depth exploration.)

Automatic code signing settings

However, when I went to make our first automated deploy today, things didn’t work; I got a somewhat cryptic error about code signing.

Read on →

We have a lot of Open Source code. For engineers without considerable experience in the open-source realm, understanding some of the copyright issues around code ownership can be tricky. I’ve been working with our CTO, dB., and our senior counsel, Yayoi Shionoiri, on creating an open-source FAQ for engineers.

What is Open Source?

Open Source code is code that can be freely examined, used, adapted, and shared by all through a license that sets forth these principles. The only potential limitation that an Open Source license is likely to impose is that future copies of the code (whether in adapted or un-adapted form) be themselves licensed in a manner consistent with the original license. At Artsy, we are committed to making our engineering work Open Source by default. A list of our Open Source projects can be found here.

Read on →

tl;dr You can try Artsy on your Amazon Echo now: say “Alexa, enable Artsy” or see alexa.artsy.net for more info.

With its powerful automatic speech recognizer, accurate natural language understanding, and outstanding text-to-speech capabilities, the Amazon Echo, nicknamed “Alexa”, has always impressed me. Though it was not the first in its category when it was introduced in late 2014, Alexa was the first consumer device in my home to truly enable conversation between human and machine. It was stationary, always listening for a wake word, and clearly outperformed all previous attempts when it came to receiving commands from the other side of the apartment.

Alexa knows about the weather, but it doesn’t know much about art.

In this post I’ll dig a little into the Alexa software platform and go over the technical details of bringing Artsy to the Echo, starting with a very simple “Ask Artsy about Norman Rockwell.”
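As a hedged sketch of what handling that utterance involves (this is not the actual Artsy skill code, which is linked below; the intent name, slot name, and lookup helper are invented), a skill receives an IntentRequest JSON payload and returns a spoken response:

```typescript
// Hypothetical sketch of an Alexa skill handler for "Ask Artsy about <artist>".
// AboutIntent, the "artist" slot, and findArtistBio are invented names; the
// request/response shapes follow the Alexa Skills Kit JSON format.
interface AlexaRequest {
  request: {
    type: string
    intent?: {
      name: string
      slots?: { [name: string]: { name: string; value?: string } }
    }
  }
}

// Build a plain-text spoken response.
const speak = (text: string) => ({
  version: "1.0",
  response: {
    outputSpeech: { type: "PlainText", text },
    shouldEndSession: true,
  },
})

// Stand-in for a real lookup against Artsy's data.
async function findArtistBio(name: string): Promise<string | undefined> {
  return `${name} is an artist on Artsy.`
}

export async function handler(event: AlexaRequest) {
  const { request } = event
  if (request.type === "IntentRequest" && request.intent && request.intent.name === "AboutIntent") {
    const slots = request.intent.slots || {}
    const artist = slots.artist && slots.artist.value
    const bio = artist ? await findArtistBio(artist) : undefined
    return speak(bio || "Sorry, I couldn't find that artist on Artsy.")
  }
  return speak("Try asking Artsy about an artist, for example Norman Rockwell.")
}
```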

Find the Artsy skill in the Alexa app and the complete Skill code on GitHub. And if you just want to learn to write a skill quickly, watch @dblock live-code a skill in this video.

Read on →