Automatically Process OSS Images Using Function Compute
1. Experiment
1.1 Knowledge points
This experiment uses Alibaba Cloud Function Compute and OSS. We will write image processing code in Function Compute and then upload images to OSS. Uploading an image automatically triggers the code in Function Compute, which processes the image.
Alibaba Cloud Function Compute is a fully-managed and event-driven computing service that allows you to focus on writing and uploading code without having to manage infrastructure such as servers. Function Compute prepares the computing resources for you and executes your code elastically and reliably, while offering a range of features like log queries, performance monitoring, and alarms.
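As a quick illustration of the programming model used throughout this experiment, a Function Compute function in Python is simply a handler that receives an event and a context. The sketch below is minimal and illustrative only; the real OSS trigger event parsed later in this lab has the same top-level "events" list.

# -*- coding: utf-8 -*-
# Minimal Function Compute handler sketch (illustrative only).
# "event" carries the trigger payload; "context" carries runtime information
# such as the temporary credentials used later in this lab.
import json

def handler(event, context):
    # For an OSS trigger, "event" is a JSON string describing the OSS operation.
    evt = json.loads(event)
    return "received {0} event(s)".format(len(evt.get("events", [])))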
1.2 Experiment process
- Prepare OSS environment
- Create functions
- Create triggers
- Demonstrate effects
1.3 Cloud resources required
1.4 Prerequisites
- Learn about Python
- Learn about OSS
2. Start the experiment environment
Click Start Lab in the upper right corner of the page to start the experiment.
After the experiment environment is successfully started, the system has deployed the resources required by this experiment in the background, including the ECS instance, RDS instance, Server Load Balancer instance, and OSS bucket. A username and password for logging on to the Alibaba Cloud Web console are also provided.

After the experiment environment is started and related resources are properly deployed, the experiment starts a countdown. You have two hours to perform experimental operations. After the countdown ends, the experiment stops, and related resources are released. During the experiment, pay attention to the remaining time and arrange your time wisely. Next, use the username and password provided by the system to log on to the Web console of Alibaba Cloud and view related resources:

Go to the logon page of Alibaba Cloud console.

Fill in the sub-user account and click Next.

Fill in the sub-user password and click Log on.

After you successfully log on to the console, the following page is displayed.

3. Prepare OSS environment
Click Products in the upper-left corner. Choose Object Storage Service.

We can see that a bucket has already been created (your bucket name may be different).

Click the name of this bucket, select Basic Settings, and click Edit in the Static Page section.

Set “index.html” as the Default Homepage and click Save, as shown in the screenshot below.

Select Files and click Create Directory to create a directory named “source”.

Similarly, create the directories “processed” and “backup”.

Now the OSS environment is ready.
4. Create a Function Compute service
Select Function Compute.

Select the US(Silicon Valley) region.

Set the service name, uncheck Bind Log, and click Next.

Select Event Function and click Next.

Set the service name to “labex-service”, the function name to “labex-image-disposal”, and the runtime to “python2.7”, then click Create. Uncheck Bind Log; the logging feature will not be used for now.


Add the code for this function: delete the sample code, paste in the following code, and click Save.
# -*- coding: utf-8 -*-
import json, oss2

def handler(event, context):
    evt = json.loads(event)
    endpoint = 'oss-{0}.aliyuncs.com'.format(evt['events'][0]['region'])
    creds = context.credentials
    auth = oss2.StsAuth(creds.accessKeyId, creds.accessKeySecret, creds.securityToken)
    bucket = oss2.Bucket(auth, endpoint, evt['events'][0]['oss']['bucket']['name'])
    objectName = evt['events'][0]['oss']['object']['key']
    # Choose a watermark style based on the file name (the watermark text is Base64-encoded)
    if objectName.find("dog") > -1:
        style = 'image/watermark,type_d3F5LXplbmhlaQ,size_30,text_SGVsbG8sRG9nISE=,color_FFFFFF,shadow_50,t_100,g_se,x_10,y_10'
    elif objectName.find("cat") > -1:
        style = 'image/watermark,type_d3F5LXplbmhlaQ,size_30,text_SGVsbG8sQ2F0ISE=,color_FFFFFF,shadow_50,t_100,g_se,x_10,y_10'
    else:
        style = 'image/watermark,type_d3F5LXplbmhlaQ,size_30,text_SGVsbG8sTGFiZXghIQ==,color_FFFFFF,shadow_50,t_100,g_se,x_10,y_10'
    # Store the watermarked image under the "processed" directory
    newObjectName = objectName.replace("source", "processed")
    object_stream = bucket.get_object(objectName, process=style)
    bucket.put_object(newObjectName, object_stream)
    # Check whether index.html already references this image
    remote_stream = bucket.get_object("index.html")
    content = ""
    flag = 0
    for line in remote_stream.read().split("\n"):
        if line.find(newObjectName) > -1:
            flag = 1
            break
        else:
            content = content + line + "\n"
    if flag == 0:
        # Insert an <img> tag right before the "<!-- -->" placeholder
        insert = """<div> <img src="%s" height="200" width="300" /> </div>""" % (newObjectName)
        res = ""
        for line in content.split("\n"):
            if line.find("<!-- -->") > -1:
                res = res + insert + "\n"
            res = res + line + "\n"
        bucket.put_object("index.html", res)
    return "labex"
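The long style strings above are OSS image-processing watermark parameters; the text_ value is the URL-safe Base64 encoding of the watermark text (for example, SGVsbG8sRG9nISE= decodes to Hello,Dog!!). If you want to experiment with your own watermark text, a small sketch like the following can build such a style string (the helper name and the sample text are just examples):

# -*- coding: utf-8 -*-
# Sketch: build an OSS watermark style string with custom text (Python 2.7,
# matching the runtime used in this lab). OSS expects the watermark text to be
# URL-safe Base64 encoded.
import base64

def watermark_style(text):
    encoded = base64.urlsafe_b64encode(text)
    return ('image/watermark,type_d3F5LXplbmhlaQ,size_30,text_{0},'
            'color_FFFFFF,shadow_50,t_100,g_se,x_10,y_10').format(encoded)

print watermark_style("Hello,Labex!!")   # reproduces the "else" style above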

Select Services and Functions and click Create again.

Click Event Function.

For the service name, select the “labex-service” service created earlier, set the function name to “labex-image-delete”, and click Create. Uncheck Bind Log; the logging feature will not be used for now.

Then copy in the following code and click Save.
# -*- coding: utf-8 -*-
import json, oss2

def handler(event, context):
    evt = json.loads(event)
    endpoint = 'oss-{0}.aliyuncs.com'.format(evt['events'][0]['region'])
    creds = context.credentials
    auth = oss2.StsAuth(creds.accessKeyId, creds.accessKeySecret, creds.securityToken)
    bucket = oss2.Bucket(auth, endpoint, evt['events'][0]['oss']['bucket']['name'])
    objectName = evt['events'][0]['oss']['object']['key']
    # Delete the corresponding image under the "processed" directory
    old_objectName = objectName.replace("source", "processed")
    bucket.delete_object(old_objectName)
    # Rewrite index.html without the line that references the deleted image
    remote_stream = bucket.get_object("index.html")
    content = ""
    for line in remote_stream.read().split("\n"):
        if line.find(old_objectName) == -1:
            content = content + line + "\n"
    bucket.put_object("index.html", content)
    return "labex"
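Both the creation and the deletion handlers parse the same OSS trigger event structure, which is why they read evt['events'][0]['oss']['object']['key']. A simplified, illustrative payload (the field values below are made up) looks roughly like this:

# -*- coding: utf-8 -*-
# Simplified sketch of an OSS trigger event; values are examples only.
import json

sample_event = json.dumps({
    "events": [{
        "eventName": "ObjectCreated:PutObject",
        "region": "us-west-1",
        "oss": {
            "bucket": {"name": "YOUR-BUCKET-NAME"},
            "object": {"key": "source/dog1.png"}
        }
    }]
})

evt = json.loads(sample_event)
print evt['events'][0]['oss']['object']['key']   # -> source/dog1.png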

Create a third function as shown above. Select “labex-service” for the service name, set the function name to “labex-image-backup”, and copy in the following code. Note that you should replace “YOUR-BUCKET-NAME” in the following code with your own bucket name.
# -*- coding: utf-8 -*-
import json, oss2, time
from itertools import islice

def handler(event, context):
    endpoint = "oss-us-west-1.aliyuncs.com"
    bucket_name = "YOUR-BUCKET-NAME"
    creds = context.credentials
    auth = oss2.StsAuth(creds.accessKeyId, creds.accessKeySecret, creds.securityToken)
    bucket = oss2.Bucket(auth, endpoint, bucket_name)
    # Use the current time (minute precision) as the name of the backup directory
    timestr = time.strftime('%Y%m%d%H%M', time.localtime(time.time()))
    # Copy every .png object under "processed" into "backup/<timestamp>/"
    for b in oss2.ObjectIterator(bucket, prefix='processed'):
        filename = b.key
        if filename.find(".png") > -1:
            print filename
            filename_stream = bucket.get_object(filename)
            bucket.put_object("backup/" + timestr + "/" + filename, filename_stream)
    return "labex"
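To make the backup layout concrete: the function prefixes every processed object key with backup/ and a minute-level timestamp, so a backup key looks like the sketch below (the file name and timestamp are illustrative).

# -*- coding: utf-8 -*-
# Sketch: how the backup keys are built (the timestamp depends on when the trigger fires).
import time

timestr = time.strftime('%Y%m%d%H%M', time.localtime(time.time()))
print "backup/" + timestr + "/" + "processed/dog1.png"
# e.g. backup/202401011230/processed/dog1.png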

Finally, you can see that there are three functions under the labex-service service.

Select Service Configurations and click Modify Configuration.

Select Role Config, and click Create Role.

Click Confirm Authorization Policy.

Click Add Policy.


Click Submit.

The configuration is complete.

5. Add OSS function triggers
5.1 Create an OSS object creation trigger
Select the labex-image-disposal function.

Click Create Trigger, as shown in the image below.

Referring to the settings below, set the trigger name to “disposal-trigger”.

Go to Quick Authorize and click Authorize.


Creation is complete.


5.2 Create an OSS object deletion trigger
Select the labex-image-delete function.

Click Create Trigger.

Set the trigger name to “delete-trigger” and click OK.

Creation is complete.

5.3 Create a timing trigger
Select the labex-image-backup function.

Refer to the settings in the figure below, set the trigger name to “timing-trigger”, and click OK.

The creation is complete, but the trigger is currently disabled.

6. Demonstrate effects
6.1 Install nginx reverse proxy
When a webpage-type file (MIME type text/html; extensions include htm, html, jsp, plg, htx, stm) in the bucket is accessed through the default OSS domain name, OSS returns the response header Content-Disposition: attachment. That is, when you access a webpage-type file from a browser, its content is not displayed; instead, the file is downloaded as an attachment. However, we need to display the page in the browser.
Therefore, we use an nginx reverse proxy. nginx rewrites the value of the Content-Disposition response header so that the browser displays the file content directly instead of downloading it.
Click Elastic Compute Service, as shown in the following figure.

We can see one running ECS instance in the Silicon Valley region.

Copy this ECS instance’s Internet IP address and remotely log on to this ECS instance (Ubuntu system). For details about remote login, refer to login.

The default account name and password of the ECS instance:
Account name: root
Password: nkYHG890..
After successful login, enter the following command to update the apt installation source.
apt update

Enter the following command to install nginx.
apt -y install nginx

Enter the command vim default to create a new file named default, copy the following content into the file, then save and exit. Replace YOUR-BUCKET-DOMAIN below with your own bucket domain name.
server {
    listen 80;
    location / {
        proxy_pass http://YOUR-BUCKET-DOMAIN;
    }
    # Drop the attachment header returned by OSS and force inline rendering
    proxy_hide_header 'Content-Disposition';
    add_header 'Content-Disposition' 'inline';
}

You can find YOUR-BUCKET-DOMAIN as shown below.

Enter the following command to move the configuration file to the nginx configuration directory and restart the nginx service.
mv default /etc/nginx/sites-available
service nginx restart

Enter the following command. If port 80 is listening, nginx has been installed successfully.
netstat -utnlp

6.2 Demonstrate the effects
Click the example images link to download the reference images.
Decompress the downloaded zip package to get nine images and one “index.html” file.
Click Upload to upload the “index.html” file to the root directory of your bucket.

Now, let’s take a look at the roles of the “index.html” file and these directories.
The “index.html” file makes it easier to display images under the “processed” directory. The file’s content is shown as follows.
<html>
  <head>
    <title>Hello OSS! </title>
    <meta charset="utf-8">
    <style>div{display:inline-block;}</style>
  </head>
  <body>
    <h1> hello labex!! </h1>
    <!-- -->
  </body>
</html>
When you upload the downloaded images to the “source” directory, the “disposal-trigger” trigger created in Step 5.1 will trigger the “labex-image-disposal” function. This function performs two tasks:
1. Process these uploaded images and store the processed images in the "processed" directory
2. Modify the index.html content and add the paths of the images added to the "processed" directory
When you delete files under the “source” directory, the “delete-trigger” trigger created in Step 5.2 will trigger the “labex-image-delete” function. This function also performs two tasks:
- Delete images in the “processed” directory accordingly
- Modify the index.html content and delete the paths of images that are deleted from the “processed” directory
The “timing-trigger” trigger created in Step 5.3 is a time trigger that fires the “labex-image-backup” function every three minutes. This function backs up files under the “processed” directory to the “backup” directory.
Now let’s proceed with the experiment.
Click the “source” directory to open it, and click Upload.

After files are uploaded,

click the return arrow icon shown in the screenshot above to go back to the upper-level directory, and then open the “processed” directory. As we can see, the images uploaded to the “source” directory have also been stored in the “processed” directory, indicating that the trigger successfully invoked the “labex-image-disposal” function.

Next, enter the following URL in the browser. Be sure to replace YOUR-ECS-PUBLIC-IP with the public IP address of your own ECS instance.
http://YOUR-ECS-PUBLIC-IP:80/index.html
We can see that a watermark has been automatically added to each uploaded image.
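The page renders in the browser (rather than downloading) because nginx rewrites the Content-Disposition header, as configured in Step 6.1. If you want to double-check this, the following sketch assumes Python 3 is available on your local machine or on the ECS instance; replace YOUR-ECS-PUBLIC-IP as above.

# Sketch: verify that the proxy returns Content-Disposition: inline (Python 3).
import urllib.request

resp = urllib.request.urlopen("http://YOUR-ECS-PUBLIC-IP/index.html")
print(resp.headers.get("Content-Disposition"))   # expected output: inline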

Go back to the “source” directory and delete the image “dog1”. (You can delete as many images as you want. However, you can only delete one image at a time. The batch deletion of several images will cause multiple trigger functions to be simultaneously executed on the “index.html” file. As a result, the “index.html” page may fail to show images as expected.)

Refresh the browser page. We can see that the number of images displayed on the webpage has decreased by one, indicating that the “labex-image-delete” function was successfully triggered. The corresponding image under the “processed” directory is also deleted; you can go to the “processed” directory to verify this.

<font color='red'>While doing the experiment, take a screenshot of the result shown above and send it to the teacher to indicate that the experiment has been completed.</font>
Now we can go back to the Function Compute console and enable the “timing-trigger” trigger that was left disabled in Step 5.3.

Return to the console of the OSS bucket and go to the “backup” directory. We can see that the directory doesn’t contain any backup files.

This is because the backup interval is three minutes. Wait for three minutes, and we can see that a backup directory has been created, indicating that the “labex-image-backup” function was successfully triggered.

Reminder:
Before you leave this lab, remember to log out of your Alibaba Cloud RAM account before you click the Stop button of your lab. Otherwise, you may encounter issues when opening a new lab session in the same browser:

7. Experiment summary
Alibaba Cloud’s Function Compute and OSS services are used in this experiment. This experiment describes how to automatically process and back up images stored in OSS by creating functions and OSS triggers in Function Compute and using OSS image processing through the Python SDK. With Function Compute, you only need to deploy your code and trigger function execution in an event-driven manner for the image processing service to run as expected. This can significantly reduce O&M costs, since infrastructure such as servers no longer needs to be purchased or managed.