We have a Slack bot running off of Lambda functions, but when one of them fails we have no idea unless we go and check the logs. It would be nice to get notified in Slack whenever one of our Lambda functions fails.
To do this we need to create a new Lambda function and add a trigger based on CloudWatch Logs. Each Lambda function automatically writes all of its print statements and errors to a CloudWatch log group named after the function, and any time there is an error it writes a log line that starts with [ERROR].

We create a trigger based off our CloudWatch log group and add a filter for whenever ERROR appears in the log. Now, whenever one of our Lambda functions has an error, it will trigger this new Lambda function.
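If you would rather script this than click through the console, the same setup can be sketched with boto3. This is a minimal, illustrative version: the function ARN, log group name, and filter name below are hypothetical placeholders, and you would substitute your own.

import boto3

logs = boto3.client('logs')
lambda_client = boto3.client('lambda')

# Hypothetical names/ARNs -- replace with your own
notifier_arn = 'arn:aws:lambda:us-east-1:123456789012:function:slack-error-notifier'
log_group = '/aws/lambda/my-github-bot'

# Allow CloudWatch Logs to invoke the notifier function
lambda_client.add_permission(
    FunctionName=notifier_arn,
    StatementId='cloudwatch-logs-error-trigger',
    Action='lambda:InvokeFunction',
    Principal='logs.amazonaws.com',
)

# Subscribe the notifier to the bot's log group, filtering for ERROR lines
logs.put_subscription_filter(
    logGroupName=log_group,
    filterName='error-filter',
    filterPattern='ERROR',
    destinationArn=notifier_arn,
)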

Now all we need is some code to take the compressed event, uncompress it, and send it to Slack. I'm hardcoding the channel ID here as I want these messages to always go to my Debug channel.
import json
import requests
import os
import base64
import gzip
import io


def post_message_to_slack(text, channel_id, blocks=None):
    return requests.post('https://slack.com/api/chat.postMessage', {
        'token': os.environ["BOT_TOKEN"],
        'channel': channel_id,
        'text': text,
        'blocks': json.dumps(blocks) if blocks else None
    }).json()


def lambda_handler(event, context):
    print(f"Received event:\n{event['awslogs']['data']}\nWith context:\n{context}")

    # CloudWatch Logs delivers the payload as base64-encoded, gzipped JSON
    bytes_data = base64.b64decode(event['awslogs']['data'])

    # Create a BytesIO object so gzip can read from the raw bytes
    compressed_data = io.BytesIO(bytes_data)

    # Decompress the bytes using gzip
    with gzip.GzipFile(fileobj=compressed_data, mode='rb') as decompress_stream:
        decompressed_data = decompress_stream.read()

    decompressed_data = decompressed_data.decode('utf-8')
    decompressed_data = json.loads(decompressed_data)
    print(decompressed_data)

    # The log group name tells us which Lambda function raised the error
    location = decompressed_data['logGroup']
    post_message_to_slack("Error in Github Bot " + str(location), "C**")  # hardcoded Debug channel ID
    post_message_to_slack(decompressed_data['logEvents'][0]['message'], "C**")
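For reference, the decompressed payload is a JSON document with the standard CloudWatch Logs subscription shape. The field values below are illustrative only, but this is where logGroup and logEvents[0]['message'] come from:

# Illustrative example of decompressed_data after json.loads()
{
    "messageType": "DATA_MESSAGE",
    "owner": "123456789012",
    "logGroup": "/aws/lambda/my-github-bot",
    "logStream": "2024/01/01/[$LATEST]abcdef1234567890",
    "subscriptionFilters": ["error-filter"],
    "logEvents": [
        {
            "id": "36000000000000000000000000000000000000000000000000000000",
            "timestamp": 1704067200000,
            "message": "[ERROR] Exception: something went wrong\nTraceback (most recent call last): ..."
        }
    ]
}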
Now, any time a user hits an error with our bot, we are told which Lambda function had the issue, along with the stack trace, back in Slack in real time.
