---
title: Logger
description: Core utility
---

Logger provides an opinionated logger with output structured as JSON.

## Key features

* Capture key fields from Lambda context and cold start, and structure logging output as JSON
* Log Lambda event when instructed (disabled by default)
* Log sampling enables DEBUG log level for a percentage of requests (disabled by default)
* Append additional keys to structured logs at any point in time

## Getting started

???+ tip
    All examples shared in this documentation are available within the [project repository](https://github.com/aws-powertools/powertools-lambda-python/tree/develop/examples){target="_blank"}.

Logger requires two settings:

| Setting           | Description                                                         | Environment variable      | Constructor parameter |
| ----------------- | ------------------------------------------------------------------- | ------------------------- | --------------------- |
| **Logging level** | Sets how verbose Logger should be (INFO, by default)                | `POWERTOOLS_LOG_LEVEL`    | `level`               |
| **Service**       | Sets **service** key that will be present across all log statements | `POWERTOOLS_SERVICE_NAME` | `service`             |

There are some [other environment variables](#environment-variables) which can be set to modify Logger's settings at a global scope.

```yaml hl_lines="12-13" title="AWS Serverless Application Model (SAM) example"
--8<-- "examples/logger/sam/template.yaml"
```

### Standard structured keys

Your Logger will add the following keys to your structured logs:

| Key                        | Example                               | Note                                                                                                                                 |
| -------------------------- | ------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------ |
| **level**: `str`           | `INFO`                                | Logging level                                                                                                                        |
| **location**: `str`        | `collect.handler:1`                   | Source code location where statement was executed                                                                                    |
| **message**: `Any`         | `Collecting payment`                  | Unserializable JSON values are cast to `str`                                                                                         |
| **timestamp**: `str`       | `2021-05-03 10:20:19,650+0000`        | Timestamp with milliseconds; uses the AWS Lambda timezone (UTC) by default                                                           |
| **service**: `str`         | `payment`                             | Service name defined, by default `service_undefined`                                                                                 |
| **xray_trace_id**: `str`   | `1-5759e988-bd862e3fe1be46a994272793` | When [tracing is enabled](https://docs.aws.amazon.com/lambda/latest/dg/services-xray.html){target="_blank"}, it shows X-Ray Trace ID |
| **sampling_rate**: `float` | `0.1`                                 | When enabled, it shows sampling rate in percentage e.g. 10%                                                                          |
| **exception_name**: `str`  | `ValueError`                          | When `logger.exception` is used and there is an exception                                                                            |
| **exception**: `str`       | `Traceback (most recent call last)..` | When `logger.exception` is used and there is an exception                                                                            |

### Capturing Lambda context info

You can enrich your structured logs with key Lambda context information via `inject_lambda_context`.

=== "inject_lambda_context.py"

    ```python hl_lines="7"
    --8<-- "examples/logger/src/inject_lambda_context.py"
    ```

=== "inject_lambda_context_output.json"

    ```json hl_lines="8-12 17-20"
    --8<-- "examples/logger/src/inject_lambda_context_output.json"
    ```

When used, this will include the following keys:

| Key                             | Example                                                                                              |
| ------------------------------- | ---------------------------------------------------------------------------------------------------- |
| **cold_start**: `bool`          | `false`                                                                                              |
| **function_name**: `str`        | `example-powertools-HelloWorldFunction-1P1Z6B39FLU73`                                                |
| **function_memory_size**: `int` | `128`                                                                                                |
| **function_arn**: `str`         | `arn:aws:lambda:eu-west-1:012345678910:function:example-powertools-HelloWorldFunction-1P1Z6B39FLU73` |
| **function_request_id**: `str`  | `899856cb-83d1-40d7-8611-9e78f15f32f4`                                                               |

### Logging incoming event

When debugging in non-production environments, you can instruct Logger to log the incoming event with `log_event` param or via `POWERTOOLS_LOGGER_LOG_EVENT` env var.

???+ warning
    This is disabled by default to prevent sensitive info from being logged

```python hl_lines="7" title="Logging incoming event"
--8<-- "examples/logger/src/log_incoming_event.py"
```

### Setting a Correlation ID

You can set a Correlation ID using `correlation_id_path` param by passing a [JMESPath expression](https://jmespath.org/tutorial.html){target="_blank" rel="nofollow"}, including [our custom JMESPath Functions](../utilities/jmespath_functions.md#powertools_json-function).

???+ tip
	You can retrieve correlation IDs via `get_correlation_id` method.

=== "set_correlation_id.py"

    ```python hl_lines="7"
    --8<-- "examples/logger/src/set_correlation_id.py"
    ```

=== "set_correlation_id_event.json"

    ```json hl_lines="3"
    --8<-- "examples/logger/src/set_correlation_id_event.json"
    ```

=== "set_correlation_id_output.json"

    ```json hl_lines="12"
    --8<-- "examples/logger/src/set_correlation_id_output.json"
    ```

#### set_correlation_id method

You can also use the `set_correlation_id` method to inject it anywhere else in your code. The example below uses the [Event Source Data Classes utility](../utilities/data_classes.md){target="_blank"} to easily access event properties.

=== "set_correlation_id_method.py"

    ```python hl_lines="11"
    --8<-- "examples/logger/src/set_correlation_id_method.py"
    ```

=== "set_correlation_id_method.json"

    ```json hl_lines="3"
    --8<-- "examples/logger/src/set_correlation_id_method.json"
    ```

=== "set_correlation_id_method_output.json"

    ```json hl_lines="7"
    --8<-- "examples/logger/src/set_correlation_id_method_output.json"
    ```

#### Known correlation IDs

To ease routine tasks like extracting correlation ID from popular event sources, we provide [built-in JMESPath expressions](#built-in-correlation-id-expressions).

=== "set_correlation_id_jmespath.py"

    ```python hl_lines="2 8"
    --8<-- "examples/logger/src/set_correlation_id_jmespath.py"
    ```

=== "set_correlation_id_jmespath.json"

    ```json hl_lines="3"
    --8<-- "examples/logger/src/set_correlation_id_jmespath.json"
    ```

=== "set_correlation_id_jmespath_output.json"

    ```json hl_lines="12"
    --8<-- "examples/logger/src/set_correlation_id_jmespath_output.json"
    ```

### Appending additional keys

???+ info "Info: Custom keys are persisted across warm invocations"
    Always set additional keys as part of your handler to ensure they have the latest value, or explicitly clear them with [`clear_state=True`](#clearing-all-state).

You can append additional keys using either mechanism:

* New keys persist across all future log messages via the `append_keys` method
* Add keys on a per-log-message basis as `keyword=value` arguments, or via the `extra` parameter
* New keys persist across all future logs in a specific thread via the `thread_safe_append_keys` method. Check the [Working with thread-safe keys](#working-with-thread-safe-keys) section.

#### append_keys method

???+ warning
    `append_keys` is not thread-safe; use [thread_safe_append_keys](#appending-thread-safe-additional-keys) instead.

You can append your own keys to your existing Logger via `append_keys(**additional_key_values)` method.

=== "append_keys.py"

    ```python hl_lines="12"
    --8<-- "examples/logger/src/append_keys.py"
    ```

=== "append_keys_output.json"

    ```json hl_lines="7"
    --8<-- "examples/logger/src/append_keys_output.json"
    ```

???+ tip "Tip: Logger will automatically reject any key with a None value"
    If you conditionally add keys depending on the payload, you can follow the example above.

    This example will add `order_id` if its value is not empty, and in subsequent invocations where `order_id` might not be present it'll remove it from the Logger.

#### Ephemeral metadata

You can pass an arbitrary number of keyword arguments (kwargs) to all log levels' methods, e.g. `logger.info, logger.warning`.

Two common use cases for this feature are enriching log statements with additional metadata, and adding certain keys conditionally.

!!! info "Any keyword argument added will not be persisted in subsequent messages."

=== "append_keys_kwargs.py"

    ```python hl_lines="8"
    --8<-- "examples/logger/src/append_keys_kwargs.py"
    ```

=== "append_keys_kwargs_output.json"

    ```json hl_lines="7"
    --8<-- "examples/logger/src/append_keys_kwargs_output.json"
    ```

#### extra parameter

The `extra` parameter is available in all log levels' methods, as implemented in the standard logging library - e.g. `logger.info, logger.warning`.

It accepts any dictionary, and all of its keys will be added as part of the root structure of the logs for that log statement.

!!! info "Any keyword argument added using `extra` will not be persisted in subsequent messages."

=== "append_keys_extra.py"

    ```python hl_lines="9"
    --8<-- "examples/logger/src/append_keys_extra.py"
    ```

=== "append_keys_extra_output.json"

    ```json hl_lines="7"
    --8<-- "examples/logger/src/append_keys_extra_output.json"
    ```

### Removing additional keys

You can remove additional keys using either mechanism:

* Remove keys across all future log messages via the `remove_keys` method
* Remove keys that persist across all future logs in a specific thread via the `thread_safe_remove_keys` method. Check the [Working with thread-safe keys](#working-with-thread-safe-keys) section.

???+ danger
    Keys added by `append_keys` can only be removed by `remove_keys` and thread-local keys added by `thread_safe_append_keys` can only be removed by `thread_safe_remove_keys` or `thread_safe_clear_keys`. Thread-local and normal logger keys are distinct values and can't be manipulated interchangeably.

#### remove_keys method

You can remove any additional key from Logger state using `remove_keys`.

=== "remove_keys.py"

    ```python hl_lines="11"
    --8<-- "examples/logger/src/remove_keys.py"
    ```

=== "remove_keys_output.json"

    ```json hl_lines="7"
    --8<-- "examples/logger/src/remove_keys_output.json"
    ```

#### Clearing all state

Logger is commonly initialized in the global scope. Due to [Lambda Execution Context reuse](https://docs.aws.amazon.com/lambda/latest/dg/runtimes-context.html){target="_blank"}, this means that custom keys can be persisted across invocations. If you want all custom keys to be deleted, you can use `clear_state=True` param in `inject_lambda_context` decorator.

???+ tip "Tip: When is this useful?"
    It is useful when you add multiple custom keys conditionally, instead of setting a default `None` value if not present. Any key with `None` value is automatically removed by Logger.

???+ danger "Danger: This can have unintended side effects if you use Layers"
    Lambda Layers code is imported before the Lambda handler.

    This means that `clear_state=True` will instruct Logger to remove any keys previously added before Lambda handler execution proceeds.

    You can either avoid running any code as part of Lambda Layers global scope, or override keys with their latest value as part of handler's execution.

=== "clear_state.py"

    ```python hl_lines="7 10"
    --8<-- "examples/logger/src/clear_state.py"
    ```

=== "clear_state_event_one.json"

    ```json hl_lines="7"
    --8<-- "examples/logger/src/clear_state_event_one.json"
    ```

=== "clear_state_event_two.json"

    ```json hl_lines="7"
    --8<-- "examples/logger/src/clear_state_event_two.json"
    ```

### Accessing currently configured keys

You can view all currently configured keys from the Logger state using the `get_current_keys()` method. This method is useful when you need to avoid overwriting keys that are already configured.

=== "get_current_keys.py"

    ```python hl_lines="4 11"
    --8<-- "examples/logger/src/get_current_keys.py"
    ```

???+ info
    For thread-local additional logging keys, use `thread_safe_get_current_keys` instead

### Log levels

The default log level is `INFO`. It can be set using the `level` constructor option, `setLevel()` method or by using the `POWERTOOLS_LOG_LEVEL` environment variable.

We support the following log levels:

| Level      | Numeric value | Standard logging   |
| ---------- | ------------- | ------------------ |
| `DEBUG`    | 10            | `logging.DEBUG`    |
| `INFO`     | 20            | `logging.INFO`     |
| `WARNING`  | 30            | `logging.WARNING`  |
| `ERROR`    | 40            | `logging.ERROR`    |
| `CRITICAL` | 50            | `logging.CRITICAL` |

If you want to access the numeric value of the current log level, you can use the `log_level` property. For example, if the current log level is `INFO`, `logger.log_level` property will return `20`.

=== "setting_log_level_constructor.py"

    ```python hl_lines="3"
    --8<-- "examples/logger/src/setting_log_level_via_constructor.py"
    ```

=== "setting_log_level_programmatically.py"

    ```python hl_lines="6 9 12"
    --8<-- "examples/logger/src/setting_log_level_programmatically.py"
    ```

#### AWS Lambda Advanced Logging Controls (ALC)

!!! question "When is it useful?"
    When you want to set a logging policy to drop informational or verbose logs for one or all AWS Lambda functions, regardless of runtime and logger used.

<!-- markdownlint-disable MD013 -->
With [AWS Lambda Advanced Logging Controls (ALC)](https://docs.aws.amazon.com/lambda/latest/dg/monitoring-cloudwatchlogs.html#monitoring-cloudwatchlogs-advanced){target="_blank"}, you can enforce a minimum log level that Lambda will accept from your application code.

When enabled, you should keep `Logger` and ALC log level in sync to avoid data loss.

Here's a sequence diagram to demonstrate how ALC will drop both `INFO` and `DEBUG` logs emitted from `Logger`, when ALC log level is stricter than `Logger`.
<!-- markdownlint-enable MD013 -->

```mermaid
sequenceDiagram
    title Lambda ALC allows WARN logs only
    participant Lambda service
    participant Lambda function
    participant Application Logger

    Note over Lambda service: AWS_LAMBDA_LOG_LEVEL="WARN"
    Note over Application Logger: POWERTOOLS_LOG_LEVEL="DEBUG"

    Lambda service->>Lambda function: Invoke (event)
    Lambda function->>Lambda function: Calls handler
    Lambda function->>Application Logger: logger.error("Something happened")
    Lambda function-->>Application Logger: logger.debug("Something happened")
    Lambda function-->>Application Logger: logger.info("Something happened")
    Lambda service--xLambda service: DROP INFO and DEBUG logs
    Lambda service->>CloudWatch Logs: Ingest error logs
```

**Priority of log level settings in Powertools for AWS Lambda**

We prioritise log level settings in this order:

1. `AWS_LAMBDA_LOG_LEVEL` environment variable
2. Explicit log level in `Logger` constructor, or by calling the `logger.setLevel()` method
3. `POWERTOOLS_LOG_LEVEL` environment variable

If you set `Logger` level lower than ALC, we will emit a warning informing you that your messages will be discarded by Lambda.

> **NOTE**
>
> With ALC enabled, we are unable to increase the minimum log level below the `AWS_LAMBDA_LOG_LEVEL` environment variable value, see [AWS Lambda service documentation](https://docs.aws.amazon.com/lambda/latest/dg/monitoring-cloudwatchlogs.html#monitoring-cloudwatchlogs-log-level){target="_blank"} for more details.

### Logging exceptions

Use `logger.exception` method to log contextual information about exceptions. Logger will include `exception_name` and `exception` keys to aid troubleshooting and error enumeration.

???+ tip
    You can use your preferred Log Analytics tool to enumerate and visualize exceptions across all your services using `exception_name` key.

=== "logging_exceptions.py"

    ```python hl_lines="15"
    --8<-- "examples/logger/src/logging_exceptions.py"
    ```

=== "logging_exceptions_output.json"

    ```json hl_lines="7-8"
    --8<-- "examples/logger/src/logging_exceptions_output.json"
    ```

#### Uncaught exceptions

!!! warning "CAUTION: some users reported a problem that causes this functionality not to work in the Lambda runtime. We recommend that you don't use this feature for the time being."

Logger can optionally log uncaught exceptions by setting `log_uncaught_exceptions=True` at initialization.

!!! info "Logger will replace any exception hook previously registered via [sys.excepthook](https://docs.python.org/3/library/sys.html#sys.excepthook){target='_blank'}."

??? question "What are uncaught exceptions?"

    It's any raised exception that wasn't handled by the [`except` statement](https://docs.python.org/3.9/tutorial/errors.html#handling-exceptions){target="_blank" rel="nofollow"}, leading a Python program to a non-successful exit.

    They are typically raised intentionally to signal a problem (`raise ValueError`), or propagated from elsewhere in your code without being handled, willingly or not (`KeyError`, `json.JSONDecodeError`, etc.).

=== "logging_uncaught_exceptions.py"

    ```python hl_lines="7"
    --8<-- "examples/logger/src/logging_uncaught_exceptions.py"
    ```

=== "logging_uncaught_exceptions_output.json"

    ```json hl_lines="7-8"
    --8<-- "examples/logger/src/logging_uncaught_exceptions_output.json"
    ```

#### Stack trace logging

By default, the Logger will automatically include the full stack trace in JSON format when using `logger.exception`. If you want to disable this feature, set `serialize_stacktrace=False` during initialization.

=== "logging_stacktrace.py"

    ```python hl_lines="7 15"
    --8<-- "examples/logger/src/logging_stacktrace.py"
    ```

=== "logging_stacktrace_output.json"

    ```json hl_lines="9-27"
    --8<-- "examples/logger/src/logging_stacktrace_output.json"
    ```

### Date formatting

Logger uses Python's standard logging date format with the addition of timezone: `2021-05-03 11:47:12,494+0000`.

You can easily change the date format using one of the following parameters:

* **`datefmt`**. You can pass any [strftime format codes](https://strftime.org/){target="_blank" rel="nofollow"}. Use `%F` if you need milliseconds.
* **`use_rfc3339`**. This flag will use a format compliant with both RFC3339 and ISO8601: `2022-10-27T16:27:43.738+00:00`

???+ tip "Prefer using [datetime string formats](https://docs.python.org/3/library/datetime.html#strftime-and-strptime-format-codes){target="_blank" rel="nofollow"}?"
	Use `use_datetime_directive` flag along with `datefmt` to instruct Logger to use `datetime` instead of `time.strftime`.

=== "date_formatting.py"

    ```python hl_lines="5 8"
    --8<-- "examples/logger/src/date_formatting.py"
    ```

=== "date_formatting_output.json"

    ```json hl_lines="6 13"
    --8<-- "examples/logger/src/date_formatting_output.json"
    ```

### Environment variables

The following environment variables are available to configure Logger at a global scope:

| Setting                   | Description                                                                                            | Environment variable                    | Default      |
| ------------------------- | ------------------------------------------------------------------------------------------------------ | --------------------------------------- | ------------ |
| **Event Logging**         | Whether to log the incoming event.                                                                     | `POWERTOOLS_LOGGER_LOG_EVENT`           | `false`      |
| **Debug Sample Rate**     | Sets the debug log sampling.                                                                           | `POWERTOOLS_LOGGER_SAMPLE_RATE`         | `0`          |
| **Disable Deduplication** | Disables log deduplication filter protection to use Pytest Live Log feature.                           | `POWERTOOLS_LOG_DEDUPLICATION_DISABLED` | `false`      |
| **TZ**                    | Sets timezone when using Logger, e.g., `US/Eastern`. Timezone is defaulted to UTC when `TZ` is not set | `TZ`                                    | `None` (UTC) |

[`POWERTOOLS_LOGGER_LOG_EVENT`](#logging-incoming-event) can also be set on a per-method basis, and [`POWERTOOLS_LOGGER_SAMPLE_RATE`](#sampling-debug-logs) on a per-instance basis. These parameter values will override the environment variable value.

## Advanced

### Built-in Correlation ID expressions

You can use any of the following built-in JMESPath expressions as part of [inject_lambda_context decorator](#setting-a-correlation-id).

???+ note "Note: Any object key named with `-` must be escaped"
    For example, **`request.headers."x-amzn-trace-id"`**.

| Name                          | Expression                            | Description                     |
| ----------------------------- | ------------------------------------- | ------------------------------- |
| **API_GATEWAY_REST**          | `"requestContext.requestId"`          | API Gateway REST API request ID |
| **API_GATEWAY_HTTP**          | `"requestContext.requestId"`          | API Gateway HTTP API request ID |
| **APPSYNC_RESOLVER**          | `'request.headers."x-amzn-trace-id"'` | AppSync X-Ray Trace ID          |
| **APPLICATION_LOAD_BALANCER** | `'headers."x-amzn-trace-id"'`         | ALB X-Ray Trace ID              |
| **EVENT_BRIDGE**              | `"id"`                                | EventBridge Event ID            |

### Working with thread-safe keys

#### Appending thread-safe additional keys

You can append your own thread-local keys to your existing Logger via the `thread_safe_append_keys` method.

=== "thread_safe_append_keys.py"

    ```python hl_lines="11"
    --8<-- "examples/logger/src/thread_safe_append_keys.py"
    ```

=== "thread_safe_append_keys_output.json"

    ```json hl_lines="8 9 17 18"
    --8<-- "examples/logger/src/thread_safe_append_keys_output.json"
    ```

#### Removing thread-safe additional keys

You can remove any additional thread-local keys from Logger using either `thread_safe_remove_keys` or `thread_safe_clear_keys`.

Use the `thread_safe_remove_keys` method to remove a list of thread-local keys that were previously added using the `thread_safe_append_keys` method.

=== "thread_safe_remove_keys.py"

    ```python hl_lines="13"
    --8<-- "examples/logger/src/thread_safe_remove_keys.py"
    ```

=== "thread_safe_remove_keys_output.json"

    ```json hl_lines="8 9 17 18 26 34"
    --8<-- "examples/logger/src/thread_safe_remove_keys_output.json"
    ```

#### Clearing thread-safe additional keys

Use the `thread_safe_clear_keys` method to remove all thread-local keys that were previously added using the `thread_safe_append_keys` method.

=== "thread_safe_clear_keys.py"

    ```python hl_lines="13"
    --8<-- "examples/logger/src/thread_safe_clear_keys.py"
    ```

=== "thread_safe_clear_keys_output.json"

    ```json hl_lines="8 9 17 18"
    --8<-- "examples/logger/src/thread_safe_clear_keys_output.json"
    ```

#### Accessing current thread-safe keys

You can view all current thread-local keys from the Logger state using the `thread_safe_get_current_keys()` method. This method is useful when you need to avoid overwriting keys that are already configured.

=== "thread_safe_get_current_keys.py"

    ```python hl_lines="13"
    --8<-- "examples/logger/src/thread_safe_get_current_keys.py"
    ```

### Reusing Logger across your code

Similar to [Tracer](./tracer.md#reusing-tracer-across-your-code){target="_blank"}, a new instance that uses the same `service` name will reuse a previous Logger instance.

Notice in the CloudWatch Logs output how `payment_id` appears as expected when logging in `collect.py`.

=== "logger_reuse.py"

    ```python hl_lines="1 9 11 12"
    --8<-- "examples/logger/src/logger_reuse.py"
    ```

=== "logger_reuse_payment.py"

    ```python hl_lines="3 7"
    --8<-- "examples/logger/src/logger_reuse_payment.py"
    ```

=== "logger_reuse_output.json"

    ```json hl_lines="12"
    --8<-- "examples/logger/src/logger_reuse_output.json"
    ```
???+ note "Note: About Child Loggers"
    Coming from the standard library, you might be used to using `logging.getLogger(__name__)`. This will create a new instance of a Logger with a different name.

    In Powertools, you can have the same effect by using `child=True` parameter: `Logger(child=True)`. This creates a new Logger instance named after `service.<module>`. All state changes will be propagated bi-directionally between Child and Parent.

    For that reason, there could be side effects depending on the order the Child Logger is instantiated, because Child Loggers don't have a handler.

    For example, if you instantiated a Child Logger and immediately used `logger.append_keys/remove_keys/set_correlation_id` to update logging state, this might fail if the Parent Logger wasn't instantiated.

    In this scenario, you can either ensure any calls manipulating state happen only after a Parent Logger is instantiated (example above), or refrain from using the `child=True` parameter altogether.

### Sampling debug logs

Use sampling when you want to dynamically change your log level to **DEBUG** based on a **percentage of your concurrent/cold start invocations**.

You can use values ranging from `0.0` to `1.0` (100%) when setting the `POWERTOOLS_LOGGER_SAMPLE_RATE` env var, or the `sample_rate` parameter in Logger.

???+ tip "Tip: When is this useful?"
    Let's imagine a sudden spike in concurrency triggered a transient issue downstream. When looking into the logs you might not have enough information, and while you can adjust log levels, the issue might not happen again.

    This feature is aimed at these transient issues, where additional debugging information can be useful.

The sampling decision happens at Logger initialization. This means sampling may happen significantly more or less often than expected depending on your traffic patterns, for example a steady low number of invocations and thus few cold starts.

???+ note
	Open a [feature request](https://github.com/aws-powertools/powertools-lambda-python/issues/new?assignees=&labels=feature-request%2C+triage&template=feature_request.md&title=){target="_blank"} if you want Logger to calculate sampling for every invocation

=== "sampling_debug_logs.py"

    ```python hl_lines="6 10"
    --8<-- "examples/logger/src/sampling_debug_logs.py"
    ```

=== "sampling_debug_logs_output.json"

    ```json hl_lines="3 5 13 16 26"
    --8<-- "examples/logger/src/sampling_debug_logs_output.json"
    ```
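
The sampling decision described above can be sketched in plain Python. `should_sample` is a hypothetical helper, not the Powertools API; it only illustrates drawing once against `sample_rate`, the way a per-initialization decision would:

```python
import random
from typing import Optional

def should_sample(sample_rate: float, rng: Optional[random.Random] = None) -> bool:
    # A single draw at "initialization" time decides whether DEBUG is enabled
    draw = (rng or random).random()
    return draw <= sample_rate

# With rate 1.0 every cold start would log at DEBUG; with 0.1 roughly 10% would
seeded = random.Random(42)
decisions = [should_sample(0.1, seeded) for _ in range(1000)]
print(sum(decisions))
```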

### LambdaPowertoolsFormatter

Logger propagates a few formatting configurations to the built-in `LambdaPowertoolsFormatter` logging formatter.

If you prefer to configure it separately, or want to bring this JSON Formatter to another application, these are the supported settings:

| Parameter                    | Description                                                                                                              | Default                                                       |
| ---------------------------- | ------------------------------------------------------------------------------------------------------------------------ | ------------------------------------------------------------- |
| **`json_serializer`**        | function to serialize `obj` to a JSON formatted `str`                                                                    | `json.dumps`                                                  |
| **`json_deserializer`**      | function to deserialize `str`, `bytes`, `bytearray` containing a JSON document to a Python obj                           | `json.loads`                                                  |
| **`json_default`**           | function to coerce unserializable values, when no custom serializer/deserializer is set                                  | `str`                                                         |
| **`datefmt`**                | string directives (strftime) to format log timestamp                                                                     | `%Y-%m-%d %H:%M:%S,%F%z`, where `%F` is a custom ms directive |
| **`use_datetime_directive`** | format the `datefmt` timestamps using `datetime`, not `time`  (also supports the custom `%F` directive for milliseconds) | `False`                                                       |
| **`utc`**                    | enforce logging timestamp to UTC (ignore `TZ` environment variable)                                                      | `False`                                                       |
| **`log_record_order`**       | set order of log keys when logging                                                                                       | `["level", "location", "message", "timestamp"]`               |
| **`kwargs`**                 | key-value to be included in log messages                                                                                 | `None`                                                        |

???+ info
    When `POWERTOOLS_DEV` env var is present and set to `"true"`, Logger's default serializer (`json.dumps`) will pretty-print log messages for easier readability.

```python hl_lines="2 7-8" title="Pre-configuring Powertools for AWS Lambda (Python) Formatter"
--8<-- "examples/logger/src/powertools_formatter_setup.py"
```
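
To get a feel for how a `log_record_order`-style setting could reorder keys, here is a standard-library sketch. `OrderedJsonFormatter` is a hypothetical stand-in, not the real `LambdaPowertoolsFormatter`:

```python
import json
import logging

class OrderedJsonFormatter(logging.Formatter):
    """Minimal sketch of key reordering; not the LambdaPowertoolsFormatter."""

    def __init__(self, log_record_order=None):
        super().__init__()
        self.log_record_order = log_record_order or ["level", "location", "message", "timestamp"]

    def format(self, record: logging.LogRecord) -> str:
        log = {
            "level": record.levelname,
            "location": f"{record.funcName}:{record.lineno}",
            "message": record.getMessage(),
            "timestamp": self.formatTime(record),
        }
        # Emit the requested keys first, then everything else
        ordered = {key: log[key] for key in self.log_record_order if key in log}
        ordered.update({k: v for k, v in log.items() if k not in ordered})
        return json.dumps(ordered)

record = logging.LogRecord("payment", logging.INFO, __file__, 10, "Collecting payment", None, None)
formatted = OrderedJsonFormatter(log_record_order=["message", "level"]).format(record)
print(formatted)
```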

### Observability providers

!!! note "In this context, an observability provider is an [AWS Lambda Partner](https://go.aws/3HtU6CZ){target="_blank" rel="nofollow"} offering a platform for logging, metrics, traces, etc."

You can send logs to the observability provider of your choice via [Lambda Extensions](https://aws.amazon.com/blogs/compute/using-aws-lambda-extensions-to-send-logs-to-custom-destinations/){target="_blank"}. In most cases, you shouldn't need any custom Logger configuration, and logs will be shipped async without any performance impact.

#### Built-in formatters

In rare circumstances where JSON logs are not parsed correctly by your provider, we offer built-in formatters to make this transition easier.

| Provider | Formatter             | Notes                                                |
| -------- | --------------------- | ---------------------------------------------------- |
| Datadog  | `DatadogLogFormatter` | Modifies default timestamp to use RFC3339 by default |

You can import and use them as any other Logger formatter via the `logger_formatter` parameter:

```python hl_lines="2 4" title="Using built-in Logger Formatters"
--8<-- "examples/logger/src/observability_provider_builtin_formatters.py"
```

### Migrating from other Loggers

If you're migrating from other Loggers, there are a few key points to be aware of: [Service parameter](#the-service-parameter), [Child Loggers](#child-loggers), [Overriding Log records](#overriding-log-records), and [Logging exceptions](#logging-exceptions).

#### The service parameter

Service is what defines the Logger name, including what the Lambda function is responsible for, or part of (e.g. payment service).

For Logger, the `service` is the logging key customers can use to search log operations for one or more functions - For example, **search for all errors, or messages like X, where service is payment**.

#### Child Loggers

<center>
```mermaid
stateDiagram-v2
    direction LR
    Parent: Logger()
    Child: Logger(child=True)
    Parent --> Child: bi-directional updates
    Note right of Child
        Both have the same service
    end note
```
</center>

> Python Logging hierarchy happens via the dot notation: `service`, `service.child`, `service.child_2`

For inheritance, Logger uses the `child=True` parameter along with the same `service` value across Loggers.
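
This dot-notation hierarchy is plain standard library behavior, observable without Powertools:

```python
import io
import logging

# "payment.collect" is automatically a child of "payment" and propagates
# records up to the parent's handler.
stream = io.StringIO()
parent = logging.getLogger("payment")
parent.setLevel(logging.INFO)
parent.addHandler(logging.StreamHandler(stream))

child = logging.getLogger("payment.collect")  # no handler of its own
child.info("Collecting payment")  # still emitted, through the parent's handler

print(stream.getvalue())
```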

For child Loggers, we introspect the name of the module where `Logger(child=True, service="name")` is called, and name your Logger as **`{service}.<module>`**.

???+ danger
    A common issue when migrating from other Loggers is that `service` might be defined in the parent Logger (no child param), and not defined in the child Logger:

=== "logging_inheritance_bad.py"

    ```python hl_lines="1 9"
    --8<-- "examples/logger/src/logging_inheritance_bad.py"
    ```

=== "logging_inheritance_module.py"
    ```python hl_lines="1 9"
    --8<-- "examples/logger/src/logging_inheritance_module.py"
    ```

In this case, Logger will register a Logger named `payment`, and a Logger named `service_undefined`. The latter doesn't inherit from the parent, and will have no handler, resulting in no message being logged to standard output.

???+ tip
    This can be fixed by either ensuring both have the `service` value set to `payment`, or simply using the environment variable `POWERTOOLS_SERVICE_NAME` to ensure the service value is the same across all Loggers when not explicitly set.

Do this instead:

=== "logging_inheritance_good.py"

    ```python hl_lines="1 9"
    --8<-- "examples/logger/src/logging_inheritance_good.py"
    ```

=== "logging_inheritance_module.py"
    ```python hl_lines="1 9"
    --8<-- "examples/logger/src/logging_inheritance_module.py"
    ```

There are two important side effects when using child loggers:

1. **Service name mismatch**. Logging messages will be dropped, as child loggers don't have logging handlers.
    * Solution: use the `POWERTOOLS_SERVICE_NAME` env var. Alternatively, use the same explicit `service` value.
2. **Changing state before a parent is instantiated**. Using `logger.append_keys` or `logger.remove_keys` without a parent Logger raises an `OrphanedChildLoggerError` exception.
    * Solution: always initialize parent Loggers first. Alternatively, defer the child's `append_keys`/`remove_keys` calls to a later stage.

=== "logging_inheritance_bad.py"

    ```python hl_lines="1 9"
    --8<-- "examples/logger/src/logging_inheritance_bad.py"
    ```

=== "logging_inheritance_module.py"

    ```python hl_lines="1 9"
    --8<-- "examples/logger/src/logging_inheritance_module.py"
    ```

#### Overriding Log records

You might want to continue to use the same date formatting style, or override `location` to display the `package.function_name:line_number` as you previously had.

Logger allows you to either change the format or suppress the following keys at initialization: `location`, `timestamp`, `xray_trace_id`.

=== "overriding_log_records.py"

    ```python hl_lines="6 10"
    --8<-- "examples/logger/src/overriding_log_records.py"
    ```

=== "overriding_log_records_output.json"

    ```json hl_lines="4"
    --8<-- "examples/logger/src/overriding_log_records_output.json"
    ```

#### Reordering log keys position

You can change the order of [standard Logger keys](#standard-structured-keys) or any keys that will be appended later at runtime via the `log_record_order` parameter.

=== "reordering_log_keys.py"

    ```python hl_lines="5 8"
    --8<-- "examples/logger/src/reordering_log_keys.py"
    ```

=== "reordering_log_keys_output.json"

    ```json hl_lines="3 10"
    --8<-- "examples/logger/src/reordering_log_keys_output.json"
    ```

#### Setting timestamp to custom Timezone

By default, this Logger and the standard logging library emit records with the default AWS Lambda timestamp in **UTC**.

<!-- markdownlint-disable MD013 -->
If you prefer to log in a specific timezone, you can configure it by setting the `TZ` environment variable, either as an AWS Lambda environment variable or at runtime within your function code. [Click here](https://docs.aws.amazon.com/lambda/latest/dg/configuration-envvars.html#configuration-envvars-runtime){target="_blank"} for a comprehensive list of available Lambda environment variables.
<!-- markdownlint-enable MD013 -->

???+ tip
    The `TZ` environment variable will be ignored if `utc` is set to `True`

=== "setting_custom_timezone.py"

    ```python hl_lines="9 12"
    --8<-- "examples/logger/src/setting_utc_timestamp.py"
    ```

    1.  If you set `TZ` within your function code, `time.tzset()` needs to be called. You don't need it when setting `TZ` as an AWS Lambda environment variable

=== "setting_custom_timezone_output.json"

    ```json hl_lines="6 13"
    --8<-- "examples/logger/src/setting_utc_timestamp_output.json"
    ```

#### Custom function for unserializable values

By default, Logger uses `str` to handle values non-serializable by JSON. You can override this behavior via `json_default` parameter by passing a Callable:

=== "unserializable_values.py"

    ```python hl_lines="6 17"
    --8<-- "examples/logger/src/unserializable_values.py"
    ```

=== "unserializable_values_output.json"

    ```json hl_lines="4-6"
    --8<-- "examples/logger/src/unserializable_values_output.json"
    ```
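
As a rough illustration of the `json_default` contract using only the standard library (the types and coercions here are illustrative, not what the example snippet above does):

```python
import json
from datetime import datetime
from decimal import Decimal

# Coerce known unserializable types explicitly, then fall back to `str`
# (the documented default behavior) for anything else.
def custom_default(value):
    if isinstance(value, Decimal):
        return float(value)
    if isinstance(value, datetime):
        return value.isoformat()
    return str(value)

log = {"amount": Decimal("10.50"), "when": datetime(2024, 1, 1), "ids": {1, 2}}
serialized = json.dumps(log, default=custom_default)
print(serialized)
```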

#### Bring your own handler

By default, Logger uses StreamHandler and logs to standard output. You can override this behavior via `logger_handler` parameter:

```python hl_lines="7-8 10" title="Configure Logger to output to a file"
--8<-- "examples/logger/src/bring_your_own_handler.py"
```
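
For a feel of what passing a custom handler achieves, here's a standard-library-only sketch writing logs to a file; the logger name and file path are illustrative, and `logger_handler` accepts any `logging.Handler` like this one:

```python
import logging
import tempfile
from pathlib import Path

# Route log records to a file instead of standard output
log_file = Path(tempfile.mkdtemp()) / "app.log"
file_handler = logging.FileHandler(log_file)

logger = logging.getLogger("payment-file")
logger.addHandler(file_handler)
logger.setLevel(logging.INFO)
logger.info("Collecting payment")
file_handler.flush()

print(log_file.read_text())
```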

#### Bring your own formatter

By default, Logger uses [LambdaPowertoolsFormatter](#lambdapowertoolsformatter) that persists its custom structure between non-cold start invocations. There could be scenarios where the existing feature set isn't sufficient for your formatting needs.

???+ info
    The most common use cases are remapping keys by bringing your existing schema, and redacting sensitive information you know upfront.

For these, you can override the `serialize` method from [LambdaPowertoolsFormatter](#lambdapowertoolsformatter).

=== "bring_your_own_formatter.py"

    ```python hl_lines="2-3 6 11-12 15"
    --8<-- "examples/logger/src/bring_your_own_formatter.py"
    ```

=== "bring_your_own_formatter_output.json"

    ```json hl_lines="6"
    --8<-- "examples/logger/src/bring_your_own_formatter_output.json"
    ```

The `log` argument is the final log record containing [our standard keys](#standard-structured-keys), optionally [Lambda context keys](#capturing-lambda-context-info), and any custom key you might have added via [append_keys](#append_keys-method) or the [extra parameter](#extra-parameter).

For exceptional cases where you want to completely replace our formatter logic, you can subclass `BasePowertoolsFormatter`.

???+ warning
    You will need to implement `append_keys`, `clear_state`, override `format`, and optionally `get_current_keys` and `remove_keys` to keep the same feature set that Powertools for AWS Lambda (Python) Logger provides. This also means tracking the added logging keys.

=== "bring_your_own_formatter_from_scratch.py"

    ```python hl_lines="6 9 11-12 15 19 23 26 38"
    --8<-- "examples/logger/src/bring_your_own_formatter_from_scratch.py"
    ```

=== "bring_your_own_formatter_from_scratch_output.json"

    ```json hl_lines="2-4"
    --8<-- "examples/logger/src/bring_your_own_formatter_from_scratch_output.json"
    ```

#### Bring your own JSON serializer

By default, Logger uses `json.dumps` and `json.loads` as serializer and deserializer respectively. There could be scenarios where you are making use of alternative JSON libraries like [orjson](https://github.com/ijl/orjson){target="_blank" rel="nofollow"}.

As parameters don't always translate well between them, you can pass any callable that receives a `dict` and returns a `str`:

```python hl_lines="1 3 7-8 13" title="Using Rust orjson library as serializer"
--8<-- "examples/logger/src/bring_your_own_json_serializer.py"
```
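
To illustrate the "callable that receives a `dict` and returns a `str`" contract without pulling in orjson, a compact `json.dumps` via `functools.partial` satisfies the same shape:

```python
import json
from functools import partial

# A compact stand-in for a third-party serializer: strip whitespace
# between JSON separators to shrink log payloads slightly.
compact_serializer = partial(json.dumps, separators=(",", ":"))

log = {"level": "INFO", "message": "Collecting payment"}
serialized = compact_serializer(log)
print(serialized)
```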

## Testing your code

### Inject Lambda Context

When unit testing your code that makes use of `inject_lambda_context` decorator, you need to pass a dummy Lambda Context, or else Logger will fail.

This is a Pytest sample that provides the minimum information necessary for Logger to succeed:

=== "fake_lambda_context_for_logger.py"

    ```python
    --8<-- "examples/logger/src/fake_lambda_context_for_logger.py"
    ```

=== "fake_lambda_context_for_logger_module.py"

    ```python
    --8<-- "examples/logger/src/fake_lambda_context_for_logger_module.py"
    ```
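
A minimal fake context can also be sketched as a dataclass. The four attribute names below are standard Lambda context attributes; the values are illustrative placeholders:

```python
from dataclasses import dataclass

# Stand-in for the Lambda context object passed to handlers in tests.
# Attribute names match the real Lambda context; values are made up.
@dataclass
class FakeLambdaContext:
    function_name: str = "test"
    memory_limit_in_mb: int = 128
    invoked_function_arn: str = "arn:aws:lambda:eu-west-1:123456789012:function:test"
    aws_request_id: str = "da658bd3-2d6f-4e7b-8ec2-937234644fdc"

context = FakeLambdaContext()
print(context.function_name)
```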

???+ tip
	Check out the built-in [Pytest caplog fixture](https://docs.pytest.org/en/latest/how-to/logging.html){target="_blank" rel="nofollow"} to assert plain log messages

### Pytest live log feature

Pytest's Live Log feature duplicates emitted log messages in order to style log statements according to their levels. For this to work, use the `POWERTOOLS_LOG_DEDUPLICATION_DISABLED` env var.

```bash title="Disabling log deduplication to use Pytest live log"
POWERTOOLS_LOG_DEDUPLICATION_DISABLED="1" pytest -o log_cli=1
```

???+ warning
    This feature should be used with care, as it explicitly disables our ability to filter propagated messages to the root logger (if configured).

## FAQ

### How can I enable boto3 and botocore library logging?

You can enable `botocore` and `boto3` logs by using the `set_stream_logger` method, which adds a stream handler for the given name and level to the logging module. By default, this logs all boto3 messages to stdout.

```python hl_lines="8-9" title="Enabling AWS SDK logging"
--8<-- "examples/logger/src/enabling_boto_logging.py"
```

### How can I enable Powertools for AWS Lambda (Python) logging for imported libraries?

You can copy the Logger setup to all, or a subset of, registered external loggers. Use the `copy_config_to_registered_loggers` method to do this.

!!! tip "We include the logger `name` attribute for all loggers we copied configuration to, to help you differentiate them."

By default, all registered loggers will be modified. You can change this behavior by providing `include` and `exclude` attributes.

You can also provide an optional `log_level` attribute that external top-level loggers will be configured with; by default, they use the source Logger's log level. You can opt out with the `ignore_log_level=True` parameter.

```python hl_lines="10" title="Cloning Logger config to all other registered standard loggers"
--8<-- "examples/logger/src/cloning_logger_config.py"
```

### How can I add standard library logging attributes to a log record?

Python standard library log records contain a [large set of attributes](https://docs.python.org/3/library/logging.html#logrecord-attributes){target="_blank" rel="nofollow"}, however only a few are included in the Powertools for AWS Lambda (Python) Logger log record by default.

You can include any of these logging attributes as keyword arguments (`kwargs`) when instantiating `Logger` or `LambdaPowertoolsFormatter`.

You can also add them later anywhere in your code with `append_keys`, or remove them with `remove_keys`.

=== "append_and_remove_keys.py"

    ```python hl_lines="3 8 10"
    --8<-- "examples/logger/src/append_and_remove_keys.py"
    ```

=== "append_and_remove_keys_output.json"

    ```json hl_lines="6 15-16"
    --8<-- "examples/logger/src/append_and_remove_keys_output.json"
    ```
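
As a standard-library analogue, here's a sketch that surfaces LogRecord attributes such as `process` and `threadName` in a JSON log line; `AttributeFormatter` is hypothetical and only illustrates the idea:

```python
import json
import logging

# Pull standard LogRecord attributes into a structured log line,
# similar in spirit to passing them as kwargs to Logger.
class AttributeFormatter(logging.Formatter):
    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "message": record.getMessage(),
            "name": record.name,
            "process": record.process,
            "threadName": record.threadName,
        })

record = logging.LogRecord("payment", logging.INFO, __file__, 1, "hello", None, None)
formatted = AttributeFormatter().format(record)
print(formatted)
```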

For log records originating from the Powertools for AWS Lambda (Python) Logger, the `name` attribute will be the same as `service`. For log records coming from the standard library logger, it will be the name of the logger (i.e. what was used as the name argument to `logging.getLogger`).

### What's the difference between `append_keys` and `extra`?

Keys added with `append_keys` will persist across multiple log messages while keys added via `extra` will only be available in a given log message operation.

Here's an example where we persist `payment_id` but not `request_id`. Note that `payment_id` remains in both log messages while `request_id` is only available in the first message.

=== "append_keys_vs_extra.py"

    ```python hl_lines="16 23"
    --8<-- "examples/logger/src/append_keys_vs_extra.py"
    ```

=== "append_keys_vs_extra_output.json"

    ```json hl_lines="9-10 19"
    --8<-- "examples/logger/src/append_keys_vs_extra_output.json"
    ```
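
The standard library's `extra` parameter behaves the same way, scoping keys to a single log call; a stdlib-only sketch (logger name and format string are illustrative):

```python
import io
import logging

# Keys passed via `extra` apply only to that one record, unlike
# persistent appended keys that survive across log calls.
stream = io.StringIO()
handler = logging.StreamHandler(stream)
handler.setFormatter(logging.Formatter("%(message)s request_id=%(request_id)s"))

logger = logging.getLogger("payment-extra")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("Collecting payment", extra={"request_id": "123"})
print(stream.getvalue())
```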

<!-- markdownlint-disable MD013 -->
### How do I aggregate and search Powertools for AWS Lambda (Python) logs across accounts?

As of now, Elasticsearch (ELK) or third-party solutions are best suited to this task. Please refer to this [discussion for more details](https://github.com/aws-powertools/powertools-lambda-python/issues/460){target="_blank"}.