Majilis 2019


I had some fun on elections day...

Late in the afternoon of Elections Day, @kudanai posted a scraper for the official results site, https://results.elections.gov.mv.

I got excited, as I usually do when I see a scraper or an API that I can integrate with a Telegram bot. Hurried through lunch, got the terminal out, and started testing the script.

Initially, I had a few issues running the script, as it was written for Python 3.6 and I was running Python 3.5. A few minor changes to how the URL and variables were concatenated solved that for me. Other than that, I had to upgrade Node.js, as I had recently uninstalled every version of it except the one needed to run Ghost. I also changed the print logic to print each candidate's details separately, with the summary at the end.
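If you're wondering what a 3.6-only script even looks like: f-strings. They landed in Python 3.6 and are a flat-out SyntaxError on 3.5. A minimal sketch of that kind of fix (the constituency code here is made up):

code = 'A01'  # hypothetical constituency code

# Python 3.6+ only; a SyntaxError on 3.5:
# url = f'https://results.elections.gov.mv/data/{code}.json'

# Works on 3.5 too, plain old concatenation:
url = 'https://results.elections.gov.mv/data/' + code + '.json'
print(url)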

Since the Elections site was behind Cloudflare and I was running the script on my server, it returned a captcha error every now and then. Wasn't really bothered to fix it properly and ended up integrating the script with Node-RED.

Around 8pm, I realized the Elections Commission wasn't going to update their results any time soon and that I should switch to a different site. Mihaaru looked great, but they were doing something like /constituency/6, 6 being the id they gave to the constituency. This meant I had to map constituency codes to their ids to get started. Sun, on the other hand, looked much easier, since their URLs carried the constituency code itself. So I wrote a simple Python script to scrape it. Yes, it's Python 2. Yes, sys.argv. I can't move on. Leave me alone!

# -*- coding: utf-8 -*-

# libraries
import sys
import urllib2

from bs4 import BeautifulSoup

# constituency code comes in as the only argument
try:
    constituency = sys.argv[1]
except IndexError:
    print 'Enter Constituency'
    sys.exit(1)

# Soup parse -- spoof a browser User-Agent so the request looks legit
url = urllib2.Request(
    'https://sun.mv/majilis2019/constituency/' + constituency,
    None,
    {'User-agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_6) '
                   'AppleWebKit/537.36 (KHTML, like Gecko) '
                   'Chrome/53.0.2785.143 Safari/537.36'})
openurl = urllib2.urlopen(url).read()
soup = BeautifulSoup(openurl, 'lxml')

stats = soup.find('div', class_='stats')
results = soup.find('div', class_='results')
candidates = results.find_all('div', class_='item')

# one .item div per candidate
for candidate in candidates:
    name = candidate.find('span', class_='candidate-name').text
    candidate_no = candidate.find('span', class_='candidate-no').text
    party = candidate.find('span', class_='party').text.strip()
    percentage = candidate.find('span', class_='percent').text
    votes = candidate.find('span', class_='votes').text.strip()

    print 'Name: ' + name
    print 'Candidate No: ' + candidate_no
    print 'Party: ' + party
    print 'Percentage: ' + percentage
    print 'Votes: ' + votes
    print '\n'

# iterating a Tag yields its children, including bare strings whose
# .find() is str.find and returns an int, hence the type check
for stat in stats:
    statdetail = stat.find('span')
    if statdetail is not None and type(statdetail) != int:
        value = stat.find('div', class_='val')
        print statdetail.text + ': ' + value.text.strip()

Don't comment about the stats. I did what I had to do to finish it in 5 minutes.
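If the sys.argv bit didn't give it away, you run it with the constituency code as the only argument. Something like this (both the filename and the code are made up, I'm not telling you what I called it):

python sun_scraper.py A01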

Now, days later, I realized that the Elections Commission's data does get updated and Sun is behind. You know what I did? I made another script. To download the .json files off the Elections site every day, keeping my files up to date and serving them locally without having to go through all the Cloudflare bullshit. Technically, the download still goes through cfscraper but umm...
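I won't paste cfdl.py here, but the cfscrape side of it boils down to roughly this (a minimal sketch, not the actual downloader; the A01 code and filename are made up):

import cfscrape

# create_scraper() returns a requests-style session that solves
# Cloudflare's JS challenge before handing back the response
scraper = cfscrape.create_scraper()
resp = scraper.get('https://results.elections.gov.mv/data/A01.json')

with open('A01.json', 'wb') as f:
    f.write(resp.content)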

Here is the script. Don't tell me to use bash or whatever fancy method. Python 2 calls Python 3. Some next-level code. I wrote it in about 3 minutes. Not kidding, @athfan timed it!

import csv
import subprocess

# const.csv holds one constituency code per line
with open('const.csv') as csv_file:
    csv_reader = csv.reader(csv_file)
    for row in csv_reader:
        if not row:
            continue  # skip blank lines
        # hand each code off to the Python 3 downloader
        command = ('python3 cfdl.py -d curl -u '
                   'https://results.elections.gov.mv/data/' + row[0] + '.json')
        process = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE)
        process.wait()

const.csv is a CSV file containing the constituency codes, one per line.
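Nothing fancy in there, just one code per line (these particular codes are made up):

A01
B02
T05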

I'll be posting these scripts to GitHub sometime soon and updating this with the links. PM me @PhoenixAtom if you need them urgently.

So, lessons learned: I won't be trusting the Elections Commission's snail-speed updates, and I won't be starting on the script an hour before voting closes.

Next election that comes up, I'll be two steps ahead. If I don't forget or actually care.