I need a media conversion worker written in Node.js or Python
€30-250 EUR
Cancelled
Posted about 6 years ago
Paid on delivery
We're looking for a back-end developer to create an AWS Elastic Beanstalk worker script that receives and processes jobs from the attached AWS SQS queue. The script, in Node.js or Python, will receive the metadata of a file a user has uploaded to S3, convert the file into proxied versions, save them to S3 as separate files, and report a "completed" or "failed" status to an external HTTP JSON API.
Input files will be video and audio (you can use something like FFmpeg), plus images and PDFs (you can use GraphicsMagick).
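As a rough sketch of how those tools are typically driven from a worker, the snippet below builds FFmpeg and GraphicsMagick command lines and runs them via subprocess. The helper names, the specific flags, and the file names are illustrative assumptions, not part of the spec:

```python
import subprocess

def ffmpeg_transcode_cmd(src, dst, height):
    """Build an FFmpeg command that scales a video to `height` pixels
    (keeping aspect ratio, width forced even) and encodes H.264/AAC."""
    return [
        "ffmpeg", "-y", "-i", src,
        "-vf", f"scale=-2:{height}",
        "-c:v", "libx264", "-c:a", "aac",
        dst,
    ]

def gm_resize_cmd(src, dst, size):
    """Build a GraphicsMagick command that shrinks an image so its
    longest side is at most `size` pixels (`>` = only shrink, never grow)."""
    return ["gm", "convert", src, "-resize", f"{size}x{size}>", dst]

def run(cmd):
    # Raises CalledProcessError if the tool exits non-zero, so the worker
    # can report a "failed" status for that variant.
    subprocess.run(cmd, check=True)
```

For example, `run(ffmpeg_transcode_cmd("in.mov", "mid.mp4", 720))` would produce a 720p proxy, and `run(gm_resize_cmd("page.jpg", "thumb.png", 300))` a 300px thumbnail.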
Here's the outline:
1 - A message is pushed to the AWS SQS queue containing: the content ID (a UUID4), the file hash of the physical object uploaded to the S3 bucket, the extension and format of the original file, and whether the conversion should be complete or "thumbnail-only".
2 - The worker script accepts the AWS SQS message and processes the file. Here are the transcodings that need to be supported:
If the input file is video (MOV, MP4, MKV), output files: Thumbnail (PNG 300px), Hi-Quality (1080p MP4 H.264), Mid-Quality (720p MP4 H.264) and Low-Quality (480p MP4 H.264)
If the input file is audio (WAV, AIFF, MP3), output files: Waveform (SVG), Hi-Quality (320kbps MP3), Mid-Quality (256kbps MP3) and Low-Quality (192kbps MP3)
If the input file is an image (JPG, PNG, PSD, EPS, TIFF, DNG), output files: Thumbnail (PNG 300px), Hi-Quality (2000px JPG) and Mid-Quality (1000px JPG)
If the input file is a PDF, output files: Thumbnail (PNG 300px) and Hi-Quality (2000px JPG)
3 - Once the conversion is complete, the worker sends a JSON POST to a webhook reporting which proxied versions of the content ID were generated, or which error happened in which version.
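Put together, steps 1-3 map onto a small long-polling worker loop. A minimal Python sketch, assuming boto3 is available; the queue URL, webhook URL, message field names, and output-key naming scheme are all placeholders:

```python
import json
import urllib.request

QUEUE_URL = "https://sqs.eu-west-1.amazonaws.com/123456789012/convert-jobs"  # placeholder
WEBHOOK_URL = "https://api.example.com/conversion-status"                    # placeholder

def output_key(content_id, variant, ext):
    # Assumed naming scheme: proxies grouped under the content UUID.
    return f"{content_id}/{variant}.{ext}"

def notify(payload):
    # Step 3: POST the completed/failed status to the external JSON API.
    req = urllib.request.Request(
        WEBHOOK_URL, data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)

def main():
    import boto3  # AWS SDK; deferred import keeps the sketch importable without it
    sqs = boto3.client("sqs")
    while True:
        resp = sqs.receive_message(QueueUrl=QUEUE_URL,
                                   MaxNumberOfMessages=1,
                                   WaitTimeSeconds=20)  # long polling
        for msg in resp.get("Messages", []):
            job = json.loads(msg["Body"])  # content_id, file_hash, extension, format, ...
            try:
                # ... download from S3 by file hash, transcode per file type,
                # upload each proxy under output_key(...) ...
                notify({"content_id": job["content_id"], "status": "completed"})
            except Exception as exc:
                notify({"content_id": job["content_id"], "status": "failed",
                        "error": str(exc)})
            sqs.delete_message(QueueUrl=QUEUE_URL,
                               ReceiptHandle=msg["ReceiptHandle"])
```

Deleting the message only after processing means SQS redelivers the job if the worker crashes mid-conversion, which is the usual at-least-once pattern for this kind of queue.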
You can integrate any external open source library to speed up this project.
Hi.
I read the project description and I'm interested. I plan to write the script in Python, which is a great fit for what you need to accomplish.
I'm not sure what you mean by "proxied versions" of the file; can you elaborate on that?
As I understand it, the AWS Beanstalk worker script will already have access to the S3 bucket, and we only upload the newly created files? Do we receive the bucket name as metadata? How do we name the new files? My assumption is that we take the name and bucket from the original file, and the newly created files go in the same bucket with a prefix added to the original name; is that right? Is the webhook API also AWS-based, or something external? Do we need any auth for it?
Thanks, these are just the basic questions that came to mind; we can discuss more once we're in contact.
Thanks, and I hope we can collaborate.
Hi There
First of all, your approach is more complex than it needs to be.
This would be much simpler with AWS Elastic Transcoder (dropping AWS SQS and AWS Beanstalk), with the following changes to your architecture:
1. Remove AWS SQS and store all the metadata from #1 in the AWS S3 object metadata itself.
2. Trigger an AWS Lambda whenever a file is uploaded to the AWS S3 bucket (using the real-time push event) and route it to the desired transcoding by analysing the extension and file type.
3. Integrate AWS SNS with Elastic Transcoder to send the conversion status to an HTTP URL via POST.
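The Lambda side of that alternative could be sketched as below. The helper name, environment variable, and preset IDs are assumptions (real preset IDs come from the Elastic Transcoder console), and the pipeline is assumed to already have SNS notifications configured:

```python
import os
import urllib.parse

def object_key(event):
    """Pull the uploaded object's key out of an S3 put-event record
    (S3 URL-encodes keys in event notifications)."""
    return urllib.parse.unquote_plus(
        event["Records"][0]["s3"]["object"]["key"])

def lambda_handler(event, context):
    import boto3  # available in the Lambda runtime
    et = boto3.client("elastictranscoder")
    key = object_key(event)
    # One job, several outputs; the preset IDs below are placeholders for
    # the 1080p/720p/480p H.264 presets mentioned in the brief.
    presets = {"hi": "PRESET_1080P", "mid": "PRESET_720P", "low": "PRESET_480P"}
    et.create_job(
        PipelineId=os.environ["PIPELINE_ID"],  # pipeline with SNS wired up
        Input={"Key": key},
        Outputs=[{"Key": f"{key}.{name}.mp4", "PresetId": pid}
                 for name, pid in presets.items()],
    )
```

Note Elastic Transcoder only covers the audio/video cases; images and PDFs would still need a separate path (e.g. a Lambda shelling out to GraphicsMagick).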
I'm available to start right away.
Cheers
Joy
Hi
I am Shafayat, a MEAN-stack developer.
I believe I can provide you exactly what you want and more.
Let’s talk for a minute, you won’t regret it, I promise.
Cheers