CyteBode's solution to "Mass download list of APKs by Package Names"

Here is my solution, following the revised bounty:

```python
import sys

from bs4 import BeautifulSoup
import requests


DOMAIN = "https://apkpure.com"
SEARCH_URL = DOMAIN + "/search?q=%s"


def download_file(url, package_name):
    file_name = "%s.apk" % (package_name.replace(".", "_"))
    local_path = "./downloaded/%s" % file_name

    r = requests.get(url, stream=True)
    print("Downloading %s... " % file_name, end="")

    total_size = int(r.headers.get('content-length', 0))
    size = 0
    print("% 6.2f%%" % 0.0, end="")
    with open(local_path, "wb") as f:
        for chunk in r.iter_content(chunk_size=65536):
            if chunk:
                size += len(chunk)
                f.write(chunk)

                print("\b" * 7, end="")
                print("% 6.2f%%" % (size / total_size * 100), end="")
                sys.stdout.flush()
    print("\b" * 7, end="")
    print("100.00%")

    return (local_path, size)


if __name__ == '__main__':
    output_csv = open("output.csv", "w")
    output_csv.write("App name,Package name,Size,Location\n")

    for line in open("apk_list.txt", "r").readlines():
        package_name = line.strip()

        # Search page
        url = SEARCH_URL % package_name
        r = requests.get(url)

        if r.status_code != 200:
            print("Could not get search page for %s." % package_name)
            continue

        soup = BeautifulSoup(r.text, "html.parser")

        first_result = soup.find("dl", class_="search-dl")
        if first_result is None:
            print("Could not find %s" % package_name)
            continue

        search_title = first_result.find("p", class_="search-title")
        search_title_a = search_title.find("a")

        app_name = search_title.text.strip()
        app_url = search_title_a.attrs["href"]

        # App page
        url = DOMAIN + app_url
        r = requests.get(url)

        if r.status_code != 200:
            print("Could not get app page for %s." % package_name)
            continue

        soup = BeautifulSoup(r.text, "html.parser")

        download_button = soup.find("a", class_=" da")

        if download_button is None:
            print("%s is a paid app. Could not download." % package_name)
            continue

        download_url = download_button.attrs["href"]

        # Download app page
        url = DOMAIN + download_url
        r = requests.get(url)

        if r.status_code != 200:
            print("Could not get app download page for %s." % package_name)
            continue

        soup = BeautifulSoup(r.text, "html.parser")

        download_link = soup.find("a", id="download_link")
        download_apk_url = download_link.attrs["href"]

        path, size = download_file(download_apk_url, package_name)

        # Write line to output CSV
        escaped_app_name = app_name.replace(",", "_")
        output_csv.write(",".join([escaped_app_name, package_name, str(size), path]))
        output_csv.write("\n")
```

Tested on Windows with Python 3.6. It requires `requests` and `BeautifulSoup`. The file containing the list of package names is just a text file with one entry per line. I tested the script with the two example package names you gave.
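The scraping hinges on a handful of BeautifulSoup lookups against apkpure's search-results markup. As a minimal, self-contained sketch of those calls (the HTML snippet below is a made-up stand-in for a real search page, not apkpure's actual markup):

```python
from bs4 import BeautifulSoup

# Hypothetical stand-in for an apkpure search results page.
html = '''
<dl class="search-dl">
  <p class="search-title"><a href="/instagram/com.instagram.android">Instagram</a></p>
</dl>
'''

soup = BeautifulSoup(html, "html.parser")
first_result = soup.find("dl", class_="search-dl")        # first search hit
search_title = first_result.find("p", class_="search-title")
app_name = search_title.text.strip()                      # "Instagram"
app_url = search_title.find("a").attrs["href"]            # "/instagram/com.instagram.android"
```

`find` returns `None` when nothing matches, which is why the script checks `first_result is None` before drilling further down.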

Here is my solution, following the revised bounty:

```python
import os
import os.path
import sys
import re

from bs4 import BeautifulSoup
import requests


DOMAIN = "https://apkpure.com"
SEARCH_URL = DOMAIN + "/search?q=%s"
DOWNLOAD_DIR = "./downloaded"
PACKAGE_NAME_FILE = "package_names.txt"


def download_file(url, package_name):
    r = requests.get(url, stream=True)

    # Parse the filename out of the Content-Disposition header,
    # falling back to a name derived from the package name.
    content_disposition = r.headers.get("content-disposition", "")
    match = re.search(r'attachment; filename="(.*)"', content_disposition)
    if match:
        filename = match.group(1)
    else:
        filename = "%s.apk" % (package_name.replace(".", "_"))
    local_path = os.path.normpath(os.path.join(DOWNLOAD_DIR, filename))

    sys.stdout.write("Downloading %s... " % filename)

    total_size = int(r.headers.get('content-length', 0))
    size = 0
    sys.stdout.write("% 6.2f%%" % 0.0)
    with open(local_path, "wb") as f:
        for chunk in r.iter_content(chunk_size=65536):
            if chunk:
                size += len(chunk)
                f.write(chunk)

                sys.stdout.write("\b" * 7)
                sys.stdout.write("% 6.2f%%" % (size / total_size * 100))
                sys.stdout.flush()
    sys.stdout.write("\b" * 7)
    sys.stdout.write("100.00%\n")

    return (local_path, size)


if __name__ == '__main__':
    # Output CSV
    output_csv = open("output.csv", "w")
    output_csv.write("App name,Package name,Size,Location\n")

    # Create download directory
    if not os.path.exists(DOWNLOAD_DIR):
        os.mkdir(DOWNLOAD_DIR)
    elif not os.path.isdir(DOWNLOAD_DIR):
        print("%s is not a directory." % DOWNLOAD_DIR)
        sys.exit(-1)

    for line in open(PACKAGE_NAME_FILE, "r").readlines():
        package_name = line.strip()

        # Search page
        url = SEARCH_URL % package_name
        r = requests.get(url)

        if r.status_code != 200:
            print("Could not get search page for %s." % package_name)
            continue

        soup = BeautifulSoup(r.text, "html.parser")

        first_result = soup.find("dl", class_="search-dl")
        if first_result is None:
            print("Could not find %s" % package_name)
            continue

        search_title = first_result.find("p", class_="search-title")
        search_title_a = search_title.find("a")

        app_name = search_title.text.strip()
        app_url = search_title_a.attrs["href"]

        # App page
        url = DOMAIN + app_url
        r = requests.get(url)

        if r.status_code != 200:
            print("Could not get app page for %s." % package_name)
            continue

        soup = BeautifulSoup(r.text, "html.parser")

        download_button = soup.find("a", class_=" da")

        if download_button is None:
            print("%s is a paid app. Could not download." % package_name)
            continue

        download_url = download_button.attrs["href"]

        # Download app page
        url = DOMAIN + download_url
        r = requests.get(url)

        if r.status_code != 200:
            print("Could not get app download page for %s." % package_name)
            continue

        soup = BeautifulSoup(r.text, "html.parser")

        download_link = soup.find("a", id="download_link")
        download_apk_url = download_link.attrs["href"]

        path, size = download_file(download_apk_url, package_name)

        # Write row to output CSV
        output_csv.write(",".join([
            '"%s"' % app_name,
            '"%s"' % package_name,
            "%d" % size,
            '"%s"' % path]))
        output_csv.write("\n")
```

The script requires `requests` and `bs4` (BeautifulSoup). The file containing the list of package names (`package_names.txt`) is just a text file with one entry per line. I tested the script on Windows and Ubuntu with Python 3.6. It does run with Python 2.7, but `requests` is having trouble making https requests.
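The revised version builds the local path with `os.path` instead of string formatting. A quick sketch of what `normpath(join(...))` does with the script's constants (output shown for a POSIX system; Windows uses backslashes):

```python
import os.path

DOWNLOAD_DIR = "./downloaded"
filename = "com_example_app.apk"

# join() inserts the separator; normpath() collapses the leading "./"
# and normalizes separators for the current OS.
local_path = os.path.normpath(os.path.join(DOWNLOAD_DIR, filename))
print(local_path)  # downloaded/com_example_app.apk on POSIX
```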
**Edit**: Added `mkdir` for the download directory. Added double quotes for the csv entries. Made the download function parse the filename from the header. Made the script run on Python 2.7 (although it doesn't work because of https issues with `requests`).
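The filename parsing added in this edit can be exercised on its own. A sketch (the helper name `filename_from_header` is mine, not from the script; the regex mirrors the script's, and the fallback kicks in when the header is missing or doesn't match):

```python
import re

def filename_from_header(content_disposition, package_name):
    # Try to pull the server-supplied filename out of the header.
    if content_disposition:
        match = re.search(r'attachment; filename="(.*)"', content_disposition)
        if match:
            return match.group(1)
    # Fall back to a name derived from the package name.
    return "%s.apk" % package_name.replace(".", "_")

print(filename_from_header('attachment; filename="Instagram_v1.0.apk"',
                           "com.instagram.android"))
# Instagram_v1.0.apk
print(filename_from_header(None, "com.instagram.android"))
# com_instagram_android.apk
```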
Here is my solution, following the revised bounty:

```python
import os
import os.path
import sys
import re
import time

from bs4 import BeautifulSoup
import requests


DOMAIN = "https://apkpure.com"
SEARCH_URL = DOMAIN + "/search?q=%s"
DOWNLOAD_DIR = "./downloaded"
PACKAGE_NAMES_FILE = "package_names.txt"
OUTPUT_CSV = "output.csv"
PROGRESS_UPDATE_DELAY = 0.25


def download_file(url, package_name):
    r = requests.get(url, stream=True)

    # Parse the filename out of the Content-Disposition header,
    # falling back to a name derived from the package name.
    content_disposition = r.headers.get("content-disposition", "")
    match = re.search(r'attachment; filename="(.*)"', content_disposition)
    if match:
        filename = match.group(1)
    else:
        filename = "%s.apk" % (package_name.replace(".", "_"))
    local_path = os.path.normpath(os.path.join(DOWNLOAD_DIR, filename))

    sys.stdout.write("Downloading %s... " % filename)

    total_size = int(r.headers.get('content-length', 0))
    size = 0
    sys.stdout.write("% 6.2f%%" % 0.0)
    t = time.time()
    with open(local_path, "wb") as f:
        for chunk in r.iter_content(chunk_size=65536):
            if chunk:
                size += len(chunk)
                f.write(chunk)

                # Only refresh the progress display every so often.
                nt = time.time()
                if nt - t >= PROGRESS_UPDATE_DELAY:
                    sys.stdout.write("\b" * 7)
                    sys.stdout.write("% 6.2f%%" % (100.0 * size / total_size))
                    sys.stdout.flush()
                    t = nt
    sys.stdout.write("\b" * 7)
    sys.stdout.write("100.00%\n")

    return (local_path, size)


if __name__ == '__main__':
    # Output CSV
    output_csv = open(OUTPUT_CSV, "w")
    output_csv.write("App name,Package name,Size,Location\n")

    # Create download directory
    if not os.path.exists(DOWNLOAD_DIR):
        os.makedirs(DOWNLOAD_DIR)
    elif not os.path.isdir(DOWNLOAD_DIR):
        print("%s is not a directory." % DOWNLOAD_DIR)
        sys.exit(-1)

    for line in open(PACKAGE_NAMES_FILE, "r").readlines():
        package_name = line.strip()

        # Search page
        url = SEARCH_URL % package_name
        r = requests.get(url)

        if r.status_code != 200:
            print("Could not get search page for %s." % package_name)
            continue

        soup = BeautifulSoup(r.text, "html.parser")

        first_result = soup.find("dl", class_="search-dl")
        if first_result is None:
            print("Could not find %s" % package_name)
            continue

        search_title = first_result.find("p", class_="search-title")
        search_title_a = search_title.find("a")

        app_name = search_title.text.strip()
        app_url = search_title_a.attrs["href"]

        # App page
        url = DOMAIN + app_url
        r = requests.get(url)

        if r.status_code != 200:
            print("Could not get app page for %s." % package_name)
            continue

        soup = BeautifulSoup(r.text, "html.parser")

        download_button = soup.find("a", class_=" da")

        if download_button is None:
            print("%s is a paid app. Could not download." % package_name)
            continue

        download_url = download_button.attrs["href"]

        # Download app page
        url = DOMAIN + download_url
        r = requests.get(url)

        if r.status_code != 200:
            print("Could not get app download page for %s." % package_name)
            continue

        soup = BeautifulSoup(r.text, "html.parser")

        download_link = soup.find("a", id="download_link")
        download_apk_url = download_link.attrs["href"]

        path, size = download_file(download_apk_url, package_name)

        # Write row to output CSV
        output_csv.write(",".join([
            '"%s"' % app_name.replace('"', '""'),
            '"%s"' % package_name.replace('"', '""'),
            "%d" % size,
            '"%s"' % path.replace('"', '""')]))
        output_csv.write("\n")
```

The script requires `requests` and `bs4` (BeautifulSoup). The file containing the list of package names (`package_names.txt`) is just a text file with one entry per line. I tested the script on Windows and Ubuntu with Python 3.6 and 2.7.
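The throttling pattern from this revision can be isolated. A sketch with an injectable clock so it is easy to test deterministically (the `ThrottledProgress` class and its `clock` parameter are my additions for illustration; the script itself calls `time.time` inline):

```python
import sys
import time

PROGRESS_UPDATE_DELAY = 0.25

class ThrottledProgress(object):
    """Emits at most one progress update per PROGRESS_UPDATE_DELAY seconds."""
    def __init__(self, clock=time.time):
        self.clock = clock
        self.last = clock()
        self.updates = 0

    def update(self, size, total_size):
        now = self.clock()
        if now - self.last >= PROGRESS_UPDATE_DELAY:
            # 100.0 forces float division, sidestepping the Python 2
            # integer-division bug mentioned in the edit note.
            sys.stdout.write("\b" * 7)
            sys.stdout.write("% 6.2f%%" % (100.0 * size / total_size))
            sys.stdout.flush()
            self.last = now
            self.updates += 1

# Fake clock advancing 0.1 s per call: only every third chunk prints.
ticks = iter(i * 0.1 for i in range(100))
progress = ThrottledProgress(clock=lambda: next(ticks))
for i in range(1, 10):
    progress.update(i * 65536, 9 * 65536)
```

With nine 64 KiB chunks and the fake clock above, only three of the nine calls actually redraw the percentage.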
**Edit 2**: Made the progress update only every 0.25 seconds. Fixed the integer division bug with the progress update on Python 2.7. Changed `mkdir` to `makedirs`. Added `.replace('"', '""')` to the CSV entries to escape double quotes. Minor cleanup and refactoring.
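The CSV quoting this edit describes (wrap each text field in double quotes and double any embedded quotes) can be sketched as a small helper; the stdlib `csv` module would handle this automatically, but this mirrors the script's manual approach. The `csv_row` function name and sample values are mine:

```python
def csv_row(app_name, package_name, size, path):
    # Wrap text fields in double quotes, doubling embedded quotes,
    # so commas and quotes inside app names don't break the CSV.
    quote = lambda s: '"%s"' % s.replace('"', '""')
    return ",".join([quote(app_name), quote(package_name),
                     "%d" % size, quote(path)])

row = csv_row('My "Cool" App, Free', "com.example.app", 1234,
              "./downloaded/app.apk")
print(row)
# "My ""Cool"" App, Free","com.example.app",1234,"./downloaded/app.apk"
```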

User: CyteBode

Question: Mass download list of APKs by Package Names
