AWSTemplateFormatVersion: 2010-09-09
Parameters:
  RepositoryBranch:
    Type: String
    Default: main
  RepositoryName:
    Type: String
  BuildImage:
    Type: String
    Default: 673918848628.dkr.ecr.us-west-2.amazonaws.com/m2-enterprise-build-tools:latest
    AllowedPattern: "[0-9]{12}\\.dkr\\.ecr\\.[a-zA-Z0-9-]+\\.amazonaws\\.com/[a-zA-Z0-9-]+:[a-zA-Z0-9-]+"
  AdminEmailAddress:
    Description: Email address for sending failed pipeline notifications
    Type: CommaDelimitedList
    Default: no-reply@example.com
  ApprovalEmailAddress:
    Description: Email address for sending approval notifications
    Type: CommaDelimitedList
    Default: no-reply@example.com
  ParameterStoreSuffix:
    Description: Suffix for the Parameter Store parameter names
    Type: String
  RetainEnvironment:
    Type: String
    Default: false
    AllowedValues:
      - true
      - false
    Description: Whether to retain every environment created by the pipeline. Each retained environment incurs cost.
Resources:
  M2ClientSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Security group for the M2 client with outbound access
      SecurityGroupEgress:
        - CidrIp: 0.0.0.0/0
          IpProtocol: "-1"
          Description: Allow outbound access
      VpcId: !Join
        - ''
        - - '{{resolve:ssm:m2cicd-vpcid-'
          - !Sub ${ParameterStoreSuffix}
          - '}}'
  M2ServerSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Security group for the M2 server with outbound access
      SecurityGroupIngress:
        - SourceSecurityGroupId: !GetAtt M2ClientSecurityGroup.GroupId
          IpProtocol: tcp
          FromPort: 6000
          ToPort: 6000
          Description: Allow inbound access to M2
        - CidrIp: 0.0.0.0/0
          IpProtocol: "-1"
          Description: Allow access from anywhere as a workaround until M2 supports restricted access to the NLB
      SecurityGroupEgress:
        - CidrIp: 0.0.0.0/0
          IpProtocol: "-1"
          Description: Allow outbound access
      VpcId: !Join
        - ''
        - - '{{resolve:ssm:m2cicd-vpcid-'
          - !Sub ${ParameterStoreSuffix}
          - '}}'
  M2ServerSecurityGroupIngress:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      IpProtocol: "-1"
      SourceSecurityGroupId: !GetAtt M2ServerSecurityGroup.GroupId
      GroupId: !GetAtt M2ServerSecurityGroup.GroupId
  M2DatabaseSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Security group for the M2 database with outbound access
      SecurityGroupIngress:
        - SourceSecurityGroupId: !GetAtt M2ServerSecurityGroup.GroupId
          IpProtocol: "-1"
          Description: Allow inbound access from the M2 server
      SecurityGroupEgress:
        - CidrIp: 0.0.0.0/0
          IpProtocol: "-1"
          Description: Allow outbound access
      VpcId: !Join
        - ''
        - - '{{resolve:ssm:m2cicd-vpcid-'
          - !Sub ${ParameterStoreSuffix}
          - '}}'
  S3ArtifactRepository:
    DependsOn: S3BucketHandler
    Type: AWS::S3::Bucket
    Properties:
      BucketName: !Sub
        - m2-artifacts-${StackString}-${AWS::AccountId}-${AWS::Region}
        - StackString: !Select [0, !Split ['-', !Select [2, !Split ['/', !Ref 'AWS::StackId']]]]
      AccessControl: Private
      PublicAccessBlockConfiguration:
        BlockPublicAcls: True
        BlockPublicPolicy: True
        IgnorePublicAcls: True
        RestrictPublicBuckets: True
      VersioningConfiguration:
        Status: Enabled
      Tags:
        - Key: Name
          Value: Artifact Repository
  S3LogsBucket:
    DependsOn: S3BucketHandler
    Type: AWS::S3::Bucket
    Properties:
      BucketName: !Sub
        - m2-logs-${StackString}-${AWS::AccountId}-${AWS::Region}
        - StackString: !Select [0, !Split ['-', !Select [2, !Split ['/', !Ref 'AWS::StackId']]]]
      AccessControl: Private
      PublicAccessBlockConfiguration:
        BlockPublicAcls: True
        BlockPublicPolicy: True
        IgnorePublicAcls: True
        RestrictPublicBuckets: True
      VersioningConfiguration:
        Status: Suspended
      Tags:
        - Key: Name
          Value: Logs Bucket
  BuildFailedEventRule:
    Type: "AWS::Events::Rule"
    Properties:
      Description: "EventRule"
      EventPattern:
        source:
          - aws.codebuild
        detail-type:
          - CodeBuild Build State Change
        detail:
          build-status:
            - FAILED
          project-name:
            - !Ref M2Build
            - !Ref M2RunTests
            - !Ref M2DeployStaging
            - !Ref M2DeployProd
      State: ENABLED
      Targets:
        - Arn: !Ref PipelineNotificationSNSTopic
          Id: PipelineNotificationTopic
          InputTransformer:
            InputTemplate: !Sub '"The action with execution id <execution-id> failed for the pipeline <pipeline>. Follow this URL to see the execution logs on CloudWatch: <logs-url>, or download the log files from S3: https://s3.console.aws.amazon.com/s3/object/${S3LogsBucket}?region=${AWS::Region}&prefix=CICDLogs/<execution-id>.gz"'
            InputPathsMap:
              execution-id: $.detail.additional-information.logs.stream-name
              pipeline: $.detail.additional-information.initiator
              logs-url: $.detail.additional-information.logs.deep-link
  PipelineNotificationSNSTopic:
    Type: AWS::SNS::Topic
  PipelineApprovalSNSTopic:
    Type: AWS::SNS::Topic
  PipelineNotificationTopicPolicy:
    Type: AWS::SNS::TopicPolicy
    Properties:
      PolicyDocument:
        Statement:
          - Sid: "PublishEventsToPipelineNotificationTopic"
            Effect: Allow
            Principal:
              Service: events.amazonaws.com
            Action: sns:Publish
            Resource: !Ref PipelineNotificationSNSTopic
          - Sid: "AccountOwnerPermission"
            Effect: "Allow"
            Principal:
              AWS: "*"
            Action:
              - "SNS:GetTopicAttributes"
              - "SNS:SetTopicAttributes"
              - "SNS:AddPermission"
              - "SNS:RemovePermission"
              - "SNS:DeleteTopic"
              - "SNS:Subscribe"
              - "SNS:ListSubscriptionsByTopic"
              - "SNS:Publish"
            Resource: !Ref PipelineNotificationSNSTopic
            Condition:
              StringEquals:
                AWS:SourceOwner: !Ref "AWS::AccountId"
      Topics:
        - !Ref PipelineNotificationSNSTopic
  PipelineApprovalTopicPolicy:
    Type: AWS::SNS::TopicPolicy
    Properties:
      PolicyDocument:
        Statement:
          - Sid: "PublishEventsToPipelineApprovalTopic"
            Effect: Allow
            Principal:
              Service: events.amazonaws.com
            Action: sns:Publish
            Resource: !Ref PipelineApprovalSNSTopic
          - Sid: "AccountOwnerPermission"
            Effect: "Allow"
            Principal:
              AWS: "*"
            Action:
              - "SNS:GetTopicAttributes"
              - "SNS:SetTopicAttributes"
              - "SNS:AddPermission"
              - "SNS:RemovePermission"
              - "SNS:DeleteTopic"
              - "SNS:Subscribe"
              - "SNS:ListSubscriptionsByTopic"
              - "SNS:Publish"
            Resource: !Ref PipelineApprovalSNSTopic
            Condition:
              StringEquals:
                AWS:SourceOwner: !Ref "AWS::AccountId"
      Topics:
        - !Ref PipelineApprovalSNSTopic
  M2Pipeline:
    Type: AWS::CodePipeline::Pipeline
    DependsOn:
      - M2DBCluster
      - M2SecretDBClusterAttachment
    Properties:
      RoleArn: !GetAtt CodePipelineServiceRole.Arn
      ArtifactStore:
        Type: S3
        Location: !Ref S3ArtifactRepository
      Stages:
        - Name: Source
          Actions:
            - Name: Source
              InputArtifacts: []
              ActionTypeId:
                Category: Source
                Owner: AWS
                Version: "1"
                Provider: CodeCommit
              OutputArtifacts:
                - Name: M2Source
              Configuration:
                BranchName: !Ref RepositoryBranch
                RepositoryName: !Ref RepositoryName
                PollForSourceChanges: false
              RunOrder: 1
        - Name: Build
          Actions:
            - Name: BuildApplication
              ActionTypeId:
                Category: Build
                Owner: AWS
                Version: "1"
                Provider: CodeBuild
              OutputArtifacts:
                - Name: M2BuildOutput
              InputArtifacts:
                - Name: M2Source
              Configuration:
                ProjectName: !Ref M2Build
              Namespace: BuildVariables
              RunOrder: 1
        - Name: DeployStaging
          Actions:
            - Name: DeployStagingEnvironment
              ActionTypeId:
                Category: Build
                Owner: AWS
                Version: "1"
                Provider: CodeBuild
              InputArtifacts:
                - Name: M2BuildOutput
                - Name: M2Source
              Configuration:
                ProjectName: !Ref M2DeployStaging
                PrimarySource: M2BuildOutput
              OutputArtifacts:
                - Name: M2StagingDeployOutput
              RunOrder: 1
            - Name: ImportData
              ActionTypeId:
                Category: Build
                Owner: AWS
                Version: "1"
                Provider: CodeBuild
              Configuration:
                ProjectName: !Ref M2ImportData
                PrimarySource: M2BuildOutput
              InputArtifacts:
                - Name: M2Source
                - Name: M2BuildOutput
                - Name: M2StagingDeployOutput
              OutputArtifacts:
                - Name: M2ImportDataOutput
              RunOrder: 2
            - Name: StartApplicationInStaging
              ActionTypeId:
                Category: Build
                Owner: AWS
                Version: "1"
                Provider: CodeBuild
              Configuration:
                ProjectName: !Ref M2StartApplication
                PrimarySource: M2BuildOutput
              InputArtifacts:
                - Name: M2Source
                - Name: M2BuildOutput
                - Name: M2StagingDeployOutput
              OutputArtifacts:
                - Name: M2StartAppOutput
              RunOrder: 3
            - Name: TestInStaging
              ActionTypeId:
                Category: Build
                Owner: AWS
                Provider: CodeBuild
                Version: "1"
              Configuration:
                ProjectName: !Ref M2RunTests
                PrimarySource: M2BuildOutput
              InputArtifacts:
                - Name: M2Source
                - Name: M2BuildOutput
                - Name: M2StagingDeployOutput
              RunOrder: 4
        - Name: Approval
          Actions:
            - Name: ApproveProdDeployment
              ActionTypeId:
                Category: Approval
                Owner: AWS
                Provider: Manual
                Version: '1'
              Configuration:
                NotificationArn: !Ref PipelineApprovalSNSTopic
                CustomData: "Open the URL above, review the details of the TestInStaging action, and approve or reject the deployment."
              RunOrder: 1
        - Name: DeployProd
          Actions:
            - Name: DeployProdEnvironment
              ActionTypeId:
                Category: Build
                Owner: AWS
                Version: "1"
                Provider: CodeBuild
              InputArtifacts:
                - Name: M2BuildOutput
                - Name: M2Source
              Configuration:
                ProjectName: !Ref M2DeployProd
                PrimarySource: M2BuildOutput
              OutputArtifacts: []
              RunOrder: 1
  M2Build:
    Type: AWS::CodeBuild::Project
    Properties:
      Artifacts:
        Type: CODEPIPELINE
      Environment:
        ComputeType: BUILD_GENERAL1_SMALL
        Image: !Ref BuildImage
        Type: LINUX_CONTAINER
        ImagePullCredentialsType: SERVICE_ROLE
      ServiceRole: !Ref M2BuildRole
      LogsConfig:
        S3Logs:
          Location: !Sub ${S3LogsBucket.Arn}/CICDLogs
          Status: ENABLED
      Source:
        Type: CODEPIPELINE
        BuildSpec: |
          version: 0.2
          env:
            exported-variables:
              - CODEBUILD_BUILD_ID
          phases:
            install:
              runtime-versions:
                python: 3.7
            pre_build:
              commands:
                - echo Installing source dependencies...
            build:
              commands:
                - echo Build started on `date`
                - /start-build.sh -Dbasedir=$CODEBUILD_SRC_DIR/source -Dloaddir=$CODEBUILD_SRC_DIR/target
                - echo Build completed on `date`
          artifacts:
            files:
              - $CODEBUILD_SRC_DIR/target/**
      Tags:
        - Key: Name
          Value: M2 Build Phase
  M2RunTests:
    Type: 'AWS::CodeBuild::Project'
    Properties:
      Artifacts:
        Type: CODEPIPELINE
      Environment:
        ComputeType: 'BUILD_GENERAL1_SMALL'
        Image: aws/codebuild/standard:5.0
        Type: 'LINUX_CONTAINER'
        EnvironmentVariables:
          - Name: JCL_FILE_NAME
            Type: PLAINTEXT
            Value: ZBNKSTMT.JCL
          - Name: RETAIN_ENVIRONMENT
            Type: PLAINTEXT
            Value: !Ref RetainEnvironment
      ServiceRole: !Ref M2TestRole
      VpcConfig:
        SecurityGroupIds:
          - !GetAtt M2ClientSecurityGroup.GroupId
        Subnets:
          - !Join
            - ''
            - - '{{resolve:ssm:m2cicd-subnet1-'
              - !Sub ${ParameterStoreSuffix}
              - '}}'
          - !Join
            - ''
            - - '{{resolve:ssm:m2cicd-subnet2-'
              - !Sub ${ParameterStoreSuffix}
              - '}}'
        VpcId: !Join
          - ''
          - - '{{resolve:ssm:m2cicd-vpcid-'
            - !Sub ${ParameterStoreSuffix}
            - '}}'
      LogsConfig:
        S3Logs:
          Location: !Sub ${S3LogsBucket.Arn}/CICDLogs
          Status: ENABLED
      Source:
        Type: CODEPIPELINE
        BuildSpec: |
          version: 0.2
          env:
            variables:
              JCL_FILE_NAME: "ZBNKSTMT.JCL"
            shell: bash
          phases:
            install:
              runtime-versions:
                python: 3.7
            pre_build:
              commands:
                - apt-get update -y
                - apt-get install jq -y
                - apt-get install -y s3270
                - wget -O py3270.py https://raw.githubusercontent.com/py3270/py3270/master/py3270/__init__.py
                - JCL_RESOURCE_ID=`grep JCL_RESOURCE_ID $CODEBUILD_SRC_DIR_M2StagingDeployOutput/build-variables.txt | cut -d '=' -f 2`
                - M2_APPLICATION_ID=`grep M2_APPLICATION_ID $CODEBUILD_SRC_DIR_M2StagingDeployOutput/build-variables.txt | cut -d '=' -f 2`
                - M2_ENVIRONMENT_ID=`grep M2_ENVIRONMENT_ID $CODEBUILD_SRC_DIR_M2StagingDeployOutput/build-variables.txt | cut -d '=' -f 2`
                - ARTIFACT_PATH=`grep ARTIFACT_PATH $CODEBUILD_SRC_DIR_M2StagingDeployOutput/build-variables.txt | cut -d '=' -f 2`
                - ARTIFACT_BUCKET=`grep ARTIFACT_BUCKET $CODEBUILD_SRC_DIR_M2StagingDeployOutput/build-variables.txt | cut -d '=' -f 2`
                - M2_LB_ENDPOINT=`grep M2_LB_ENDPOINT $CODEBUILD_SRC_DIR_M2StagingDeployOutput/build-variables.txt | cut -d '=' -f 2`
                - JCL_F_NAME=`aws s3 ls s3://$ARTIFACT_BUCKET/$ARTIFACT_PATH/jcl/ | grep -iw $JCL_FILE_NAME | awk '{print $4}'`
                - echo "Application ID - $M2_APPLICATION_ID"
                - echo "Environment ID - $M2_ENVIRONMENT_ID"
                - echo "JCL_RESOURCE_ID - $JCL_RESOURCE_ID"
                - echo "M2_LB_ENDPOINT - $M2_LB_ENDPOINT"
                - export M2_LB_ENDPOINT
                - cp $CODEBUILD_SRC_DIR_M2Source/tests/test_*.py .
            build:
              commands:
                - python test_suite.py
                - BATCHJOB_EXEC_ID=`aws m2 start-batch-job --application-id $M2_APPLICATION_ID --batch-job jclFileName=$JCL_F_NAME --query executionId | sed 's/^"\(.*\)"$/\1/'`
                - echo "Job Execution Id - $BATCHJOB_EXEC_ID"
                - |
                  counter=1
                  while [[ $counter -lt 40 ]]
                  do
                    JOB_STATUS=`aws m2 get-batch-job-execution --application-id $M2_APPLICATION_ID --execution-id $BATCHJOB_EXEC_ID --query status`
                    echo "Status of the batch job: $JOB_STATUS"
                    if [[ "$JOB_STATUS" == "\"Completed\"" ]]
                    then
                      echo "Batch job execution completed successfully"
                      counter=42
                    elif [[ "$JOB_STATUS" == "\"Failed\"" ]]
                    then
                      echo "Failed to run the batch job"
                      exit 1
                    else
                      sleep 15
                      counter=$(( $counter + 1 ))
                      if [[ $counter -eq 40 ]]
                      then
                        echo "Batch job execution is still in state $JOB_STATUS after 10 minutes"
                        exit 1
                      fi
                    fi
                  done
                - echo Tests completed on `date`
            post_build:
              on-failure: CONTINUE
              commands:
                - |
                  if [[ "$RETAIN_ENVIRONMENT" == "false" ]]
                  then
                    echo "Initiating Cleanup..."
                    if [[ ! -z "$M2_APPLICATION_ID" ]]
                    then
                      APP_STATUS=`aws m2 get-application --application-id $M2_APPLICATION_ID --query status`
                      echo "Status of the application: $APP_STATUS"
                      if [[ "$APP_STATUS" == "\"Starting\"" || "$APP_STATUS" == "\"Running\"" ]]
                      then
                        aws m2 stop-application --application-id $M2_APPLICATION_ID --query status
                        counter=1
                        while [[ $counter -lt 20 ]]
                        do
                          APP_STATUS=`aws m2 get-application --application-id $M2_APPLICATION_ID --query status`
                          echo "Status of the application: $APP_STATUS"
                          if [[ "$APP_STATUS" == "\"Stopped\"" ]]
                          then
                            echo "Application with Id $M2_APPLICATION_ID has been stopped"
                            counter=42
                          else
                            sleep 15
                            counter=$(( $counter + 1 ))
                            if [[ $counter -eq 20 ]]
                            then
                              echo "Application was not stopped in 5 minutes."
                              exit 1
                            fi
                          fi
                        done
                      fi
                      echo "Deleting application with Id - $M2_APPLICATION_ID"
                      aws m2 delete-application --application-id $M2_APPLICATION_ID --query status
                      echo "Waiting for the application to be deleted"
                      counter=1
                      while [[ $counter -lt 40 ]]
                      do
                        DELETED_APPLICATION=`aws m2 list-applications --query "applications[?applicationId=='$M2_APPLICATION_ID'].applicationId" --output text`
                        if [[ "$DELETED_APPLICATION" == '' ]]
                        then
                          echo "Application with Id $M2_APPLICATION_ID has been deleted"
                          echo "Deleting environment - $M2_ENVIRONMENT_ID"
                          aws m2 delete-environment --environment-id "$M2_ENVIRONMENT_ID"
                          counter=42
                        else
                          echo "Waiting for the application with Id $DELETED_APPLICATION to be deleted"
                          sleep 15
                          counter=$(( $counter + 1 ))
                          if [[ $counter -eq 40 ]]
                          then
                            echo "Application was not deleted in 10 minutes."
                            exit 1
                          fi
                        fi
                      done
                    fi
                  fi
      TimeoutInMinutes: 30
  M2DeployStaging:
    Type: AWS::CodeBuild::Project
    Properties:
      TimeoutInMinutes: 30
      Artifacts:
        Type: CODEPIPELINE
      Environment:
        ComputeType: BUILD_GENERAL1_SMALL
        Image: aws/codebuild/standard:5.0
        Type: LINUX_CONTAINER
        EnvironmentVariables:
          - Name: ARTIFACT_BUCKET
            Type: PLAINTEXT
            Value: !Ref S3ArtifactRepository
          - Name: ARTIFACT_PREFIX
            Type: PLAINTEXT
            Value: bankdemo
          - Name: SECRET_ARN
            Type: PLAINTEXT
            Value: !Ref M2DatabaseSecret
          - Name: RETAIN_ENVIRONMENT
            Type: PLAINTEXT
            Value: !Ref RetainEnvironment
      ServiceRole: !Ref M2DeployRole
      LogsConfig:
        S3Logs:
          Location: !Sub ${S3LogsBucket.Arn}/CICDLogs
          Status: ENABLED
      Source:
        Type: CODEPIPELINE
        BuildSpec: !Sub
          - |
            version: 0.2
            env:
              shell: bash
            phases:
              install:
                runtime-versions:
                  python: 3.7
              pre_build:
                commands:
                  - apt-get update
                  - apt-get install jq
                  - echo "Creating the folder structure and copying files..."
                  - rm -rf /tmp/bankdemo
                  - mkdir -p /tmp/bankdemo/RDEF
                  - mkdir -p /tmp/bankdemo/transaction
                  - mkdir -p /tmp/bankdemo/xa
                  - mkdir -p /tmp/bankdemo/jcl
                  - find $CODEBUILD_SRC_DIR_M2Source/config -iname '*.so' -exec cp {} /tmp/bankdemo/xa/ \;
                  - find $CODEBUILD_SRC_DIR_M2Source/config -iname 'dfhdrdat*' -exec cp {} /tmp/bankdemo/RDEF/ \;
                  - find $CODEBUILD_SRC_DIR_M2Source/source -iname '*.jcl' -exec cp {} /tmp/bankdemo/jcl/ \;
                  - find $CODEBUILD_SRC_DIR_M2Source/source -iname '*.prc' -exec cp {} /tmp/bankdemo/jcl/ \;
                  # - find $CODEBUILD_SRC_DIR_M2Source/source -iname '*.ctl' -exec cp {} /tmp/bankdemo/jcl/ \;
                  - find $CODEBUILD_SRC_DIR_M2Source/source -iname '*.txt' -exec cp {} /tmp/bankdemo/jcl/ \;
                  - find $CODEBUILD_SRC_DIR/codebuild/output/src*/src/target -iname '*.so' -exec cp {} /tmp/bankdemo/transaction/ \;
                  - find $CODEBUILD_SRC_DIR/codebuild/output/src*/src/target -iname '*.SO' -exec cp {} /tmp/bankdemo/transaction/ \;
                  - find $CODEBUILD_SRC_DIR/codebuild/output/src*/src/target -iname '*.MOD' -exec cp {} /tmp/bankdemo/transaction/ \;
                  - CB_BUILD_ID=`echo $CODEBUILD_BUILD_ID | awk -F':' '{print $2}'`
                  - ARTIFACT_PATH="$ARTIFACT_PREFIX/$CB_BUILD_ID"
                  - aws s3 cp --recursive /tmp/bankdemo/ s3://$ARTIFACT_BUCKET/$ARTIFACT_PATH/
                  - echo "Updating the S3 bucket and prefix in the application definition file"
                  - cp $CODEBUILD_SRC_DIR_M2Source/config/application-definition-template-config.json /tmp/application-definition-template-config.json
                  - sed -i "s/REPLAC_S3_BUCKET/$ARTIFACT_BUCKET/" /tmp/application-definition-template-config.json
                  - sed -i "s@REPLAC_S3_PREFIX@$ARTIFACT_PATH@" /tmp/application-definition-template-config.json
                  - sed -i "s@REPLACE_SECRET_ARN@$SECRET_ARN@g" /tmp/application-definition-template-config.json
              build:
                commands:
                  - echo "Creating environment"
                  - M2_ENVIRONMENT_ID=$( aws m2 create-environment --name Stage-$CB_BUILD_ID --description "Staging environment as part of build $CB_BUILD_ID" --engine-type microfocus --instance-type M2.m5.large --subnet-ids ${SubnetId1} ${SubnetId2} --security-group-ids ${SecurityGroupId} --query environmentId | sed 's/^"\(.*\)"$/\1/' )
                  - echo "Created environment with Id - $M2_ENVIRONMENT_ID"
                  - M2_ENVIRONMENT_ARN=$( aws m2 get-environment --environment-id $M2_ENVIRONMENT_ID --query environmentArn )
                  - echo "EnvironmentARN - $M2_ENVIRONMENT_ARN"
                  - cat /tmp/application-definition-template-config.json
                  - echo "Creating demo application to test"
                  - M2_APPLICATION_ID=$( aws m2 create-application --name Staging-App-$CB_BUILD_ID --description "Staging application as part of build $CB_BUILD_ID" --engine-type microfocus --definition file:///tmp/application-definition-template-config.json --query applicationId | sed 's/^"\(.*\)"$/\1/' )
                  - echo "Created application with Id - $M2_APPLICATION_ID"
                  - echo ARTIFACT_PATH=$ARTIFACT_PATH > $CODEBUILD_SRC_DIR/build-variables.txt
                  - echo ARTIFACT_BUCKET=$ARTIFACT_BUCKET >> $CODEBUILD_SRC_DIR/build-variables.txt
                  - echo M2_ENVIRONMENT_ID=$M2_ENVIRONMENT_ID >> $CODEBUILD_SRC_DIR/build-variables.txt
                  - echo JCL_RESOURCE_ID=$( jq '.content' /tmp/application-definition-template-config.json | sed 's/\\"/"/g' | sed 's/^"\(.*\)"$/\1/' | jq '.resources[] | select(."resource-type" == "jcl-job") | ."resource-id"' ) >> $CODEBUILD_SRC_DIR/build-variables.txt
                  - echo M2_APPLICATION_ID=$M2_APPLICATION_ID >> $CODEBUILD_SRC_DIR/build-variables.txt
                  - |
                    counter=1
                    while [[ $counter -lt 20 ]]
                    do
                      ENV_STATUS=`aws m2 get-environment --environment-id $M2_ENVIRONMENT_ID --query status`
                      echo "Status of the environment: $ENV_STATUS"
                      if [[ "$ENV_STATUS" == "\"Available\"" ]]
                      then
                        echo "Deployed staging environment on `date`"
                        counter=42
                      elif [[ "$ENV_STATUS" == "\"Failed\"" ]]
                      then
                        ENV_STATUS_REASON=`aws m2 get-environment --environment-id $M2_ENVIRONMENT_ID --query statusReason`
                        echo "Failed to create the environment. Reason: $ENV_STATUS_REASON"
                        exit 1
                      else
                        sleep 15
                        counter=$(( $counter + 1 ))
                        if [[ $counter -eq 20 ]]
                        then
                          echo "Environment was not created in 5 minutes."
                          exit 1
                        fi
                      fi
                    done
                  - |
                    counter=1
                    while [[ $counter -lt 20 ]]
                    do
                      M2_LB_ENDPOINT=`aws m2 get-environment --environment-id $M2_ENVIRONMENT_ID --query loadBalancerArn | sed 's/^"\(.*\)"$/\1/' | cut -d ':' -f 6 | awk -v region=$AWS_REGION -F '/' '{print $3"-"$4".elb."region".amazonaws.com"}'`
                      echo M2_LB_ENDPOINT=$M2_LB_ENDPOINT >> $CODEBUILD_SRC_DIR/build-variables.txt
                      echo "M2 load balancer URL - $M2_LB_ENDPOINT"
                      APP_STATUS=`aws m2 get-application --application-id $M2_APPLICATION_ID --query status`
                      echo "Status of the application: $APP_STATUS"
                      if [[ "$APP_STATUS" == "\"Available\"" ]]
                      then
                        M2_APPLICATION_VERSION=`aws m2 get-application --application-id $M2_APPLICATION_ID --query latestVersion.applicationVersion`
                        echo "Application is available with version - $M2_APPLICATION_VERSION"
                        M2_DEPLOY_ID=`aws m2 create-deployment --application-id $M2_APPLICATION_ID --application-version $M2_APPLICATION_VERSION --environment-id $M2_ENVIRONMENT_ID --query deploymentId | sed 's/^"\(.*\)"$/\1/'`
                        echo "Deployed application with deployment id - $M2_DEPLOY_ID"
                        echo M2_DEPLOY_ID=$M2_DEPLOY_ID >> $CODEBUILD_SRC_DIR/build-variables.txt
                        deploy_counter=1
                        while [[ $deploy_counter -lt 20 ]]
                        do
                          DEPLOY_STATUS=`aws m2 get-deployment --application-id $M2_APPLICATION_ID --deployment-id $M2_DEPLOY_ID --query status`
                          echo "Status of the deployment: $DEPLOY_STATUS"
                          if [[ "$DEPLOY_STATUS" == "\"Succeeded\"" ]]
                          then
                            echo "Application deployment completed"
                            deploy_counter=42
                          elif [[ "$DEPLOY_STATUS" == "\"Failed\"" ]]
                          then
                            echo "Failed to deploy the application"
                            exit 1
                          else
                            sleep 15
                            deploy_counter=$(( $deploy_counter + 1 ))
                            if [[ $deploy_counter -eq 20 ]]
                            then
                              echo "Application was not deployed in 5 minutes."
                              exit 1
                            fi
                          fi
                        done
                        counter=42
                      elif [[ "$APP_STATUS" == "\"Failed\"" ]]
                      then
                        echo "Failed to create the application"
                        exit 1
                      else
                        sleep 15
                        counter=$(( $counter + 1 ))
                        if [[ $counter -eq 20 ]]
                        then
                          echo "Application was not created in 5 minutes."
                          exit 1
                        fi
                      fi
                    done
              post_build:
                on-failure: CONTINUE
                commands:
                  - |
                    if [[ "$RETAIN_ENVIRONMENT" == "false" && "$CODEBUILD_BUILD_SUCCEEDING" -eq 0 ]]
                    then
                      echo "Build is failing. Initiating Cleanup..."
                      if [[ ! -z "$M2_APPLICATION_ID" ]]
                      then
                        APP_STATUS=`aws m2 get-application --application-id $M2_APPLICATION_ID --query status`
                        echo "Status of the application: $APP_STATUS"
                        if [[ "$APP_STATUS" == "\"Starting\"" || "$APP_STATUS" == "\"Running\"" ]]
                        then
                          aws m2 stop-application --application-id $M2_APPLICATION_ID --query status
                          counter=1
                          while [[ $counter -lt 20 ]]
                          do
                            APP_STATUS=`aws m2 get-application --application-id $M2_APPLICATION_ID --query status`
                            echo "Status of the application: $APP_STATUS"
                            if [[ "$APP_STATUS" == "\"Stopped\"" ]]
                            then
                              echo "Application with Id $M2_APPLICATION_ID has been stopped"
                              counter=42
                            else
                              sleep 15
                              counter=$(( $counter + 1 ))
                              if [[ $counter -eq 20 ]]
                              then
                                echo "Application was not stopped in 5 minutes."
                                exit 1
                              fi
                            fi
                          done
                        fi
                        echo "Deleting application with Id - $M2_APPLICATION_ID"
                        aws m2 delete-application --application-id $M2_APPLICATION_ID --query status
                        echo "Waiting for the application to be deleted"
                        counter=1
                        while [[ $counter -lt 40 ]]
                        do
                          DELETED_APPLICATION=`aws m2 list-applications --query "applications[?applicationId=='$M2_APPLICATION_ID'].applicationId" --output text`
                          if [[ "$DELETED_APPLICATION" == '' ]]
                          then
                            echo "Application with Id $M2_APPLICATION_ID has been deleted"
                            echo "Deleting environment - $M2_ENVIRONMENT_ID"
                            aws m2 delete-environment --environment-id "$M2_ENVIRONMENT_ID"
                            counter=42
                          else
                            echo "Waiting for the application with Id $DELETED_APPLICATION to be deleted"
                            sleep 15
                            counter=$(( $counter + 1 ))
                            if [[ $counter -eq 40 ]]
                            then
                              echo "Application was not deleted in 10 minutes."
                              exit 1
                            fi
                          fi
                        done
                      fi
                    fi
            artifacts:
              discard-paths: yes
              files:
                - $CODEBUILD_SRC_DIR/build-variables.txt
          - SubnetId1: !Join
              - ''
              - - '{{resolve:ssm:m2cicd-subnet1-'
                - !Sub ${ParameterStoreSuffix}
                - '}}'
            SubnetId2: !Join
              - ''
              - - '{{resolve:ssm:m2cicd-subnet2-'
                - !Sub ${ParameterStoreSuffix}
                - '}}'
            SecurityGroupId: !GetAtt M2ServerSecurityGroup.GroupId
      Tags:
        - Key: Name
          Value: M2 Deploy Staging Phase
  M2ImportData:
    Type: AWS::CodeBuild::Project
    Properties:
      TimeoutInMinutes: 40
      Artifacts:
        Type: CODEPIPELINE
      Environment:
        ComputeType: BUILD_GENERAL1_SMALL
        Image: aws/codebuild/standard:5.0
        Type: LINUX_CONTAINER
        EnvironmentVariables:
          - Name: RETAIN_ENVIRONMENT
            Type: PLAINTEXT
            Value: !Ref RetainEnvironment
          - Name: M2_DATA_STORE
            Type: PLAINTEXT
            Value: !Join
              - ''
              - - '{{resolve:ssm:m2cicd-data-bucket-'
                - !Sub ${ParameterStoreSuffix}
                - '}}'
      ServiceRole: !Ref M2ImportDataRole
      LogsConfig:
        S3Logs:
          Location: !Sub ${S3LogsBucket.Arn}/CICDLogs
          Status: ENABLED
      Source:
        Type: CODEPIPELINE
        BuildSpec: |
          version: 0.2
          env:
            shell: bash
          phases:
            install:
              runtime-versions:
                python: 3.7
            pre_build:
              commands:
                - apt-get update
                - apt-get install jq
                - M2_APPLICATION_ID=`grep M2_APPLICATION_ID $CODEBUILD_SRC_DIR_M2StagingDeployOutput/build-variables.txt | cut -d '=' -f 2`
                - echo "Application Id - $M2_APPLICATION_ID"
                - echo "M2_DATA_STORE - $M2_DATA_STORE"
                - |
                  cat <<EOF > validation.lib
                  check_import_status() {
                    M2_APPLICATION_ID=\$1
                    M2_DATASET_IMPORT_TASK_ID=\$2
                    TASK_TYPE=\$3
                    if [[ -z "\$M2_DATASET_IMPORT_TASK_ID" ]]
                    then
                      echo "Import \$TASK_TYPE Task Failed"
                      exit 1
                    else
                      echo "Import \$TASK_TYPE Task - \$M2_DATASET_IMPORT_TASK_ID"
                      counter=1
                      while [[ \$counter -lt 20 ]]
                      do
                        ENV_STATUS=\`aws m2 get-data-set-import-task --application-id \$M2_APPLICATION_ID --task-id \$M2_DATASET_IMPORT_TASK_ID --query status\`
                        echo "Status of the import \$TASK_TYPE task: \$ENV_STATUS"
                        if [[ "\$ENV_STATUS" == "\"Completed\"" ]]
                        then
                          echo "Import \$TASK_TYPE completed \`date\`"
                          counter=42
                          ENV_RESPONSE=\`aws m2 get-data-set-import-task --application-id \$M2_APPLICATION_ID --task-id \$M2_DATASET_IMPORT_TASK_ID\`
                          ENV_FAILED=\$( jq '.summary.failed' <<< "\${ENV_RESPONSE}" )
                          ENV_SUCCEEDED=\$( jq '.summary.succeeded' <<< "\${ENV_RESPONSE}" )
                          echo Summary
                          echo "Failed \$ENV_FAILED"
                          echo "Succeeded \$ENV_SUCCEEDED"
                          if [[ \$ENV_FAILED -ge 1 ]]
                          then
                            echo "At least one import failed"
                            exit 1
                          fi
                          return 0
                        elif [[ "\$ENV_STATUS" == "\"Failed\"" ]]
                        then
                          echo "Failed to \$TASK_TYPE catalog"
                          exit 1
                        else
                          sleep 15
                          counter=\$(( \$counter + 1 ))
                          if [[ \$counter -eq 20 ]]
                          then
                            echo "\$TASK_TYPE was not imported in 5 minutes"
                            exit 1
                          fi
                        fi
                      done
                    fi
                  }
                  EOF
            build:
              commands:
                - echo Import Data Files started `date`
                - echo "S3 Bucket for data files - $M2_DATA_STORE"
                - echo "Create MFI01V.MFIDEMO.BNKACC"
                - M2_DATASET_IMPORT_TASK_ID=$( aws m2 create-data-set-import-task --application-id "$M2_APPLICATION_ID" --import-config "{\"dataSets\":[{\"dataSet\":{\"storageType\":\"Database\",\"datasetName\":\"MFI01V.MFIDEMO.BNKACC\",\"relativePath\":\"DATA\",\"datasetOrg\":{\"vsam\":{\"format\":\"KS\",\"encoding\":\"A\",\"primaryKey\":{\"length\":9,\"offset\":5},\"alternateKeys\":[{\"length\":5,\"offset\":0,\"name\":\"Key1\"}]}},\"recordLength\":{\"min\":200,\"max\":200}},\"externalLocation\":{\"s3Location\":\"s3://$M2_DATA_STORE/catalog/data/MFI01V.MFIDEMO.BNKACC.DAT\"}}]}" --query taskId | sed 's/^"\(.*\)"$/\1/' )
                - |
                  . ./validation.lib
                  check_import_status $M2_APPLICATION_ID $M2_DATASET_IMPORT_TASK_ID MFI01V.MFIDEMO.BNKACC
                - echo "Create MFI01V.MFIDEMO.BNKCUST"
                - M2_DATASET_IMPORT_TASK_ID=$( aws m2 create-data-set-import-task --application-id "$M2_APPLICATION_ID" --import-config "{\"dataSets\":[{\"dataSet\":{\"storageType\":\"Database\",\"datasetName\":\"MFI01V.MFIDEMO.BNKCUST\",\"relativePath\":\"DATA\",\"datasetOrg\":{\"vsam\":{\"format\":\"KS\",\"encoding\":\"A\",\"primaryKey\":{\"length\":5,\"offset\":0},\"alternateKeys\":[{\"length\":25,\"offset\":5,\"name\":\"Key1\"},{\"length\":25,\"offset\":30,\"name\":\"Key2\"}]}},\"recordLength\":{\"min\":250,\"max\":250}},\"externalLocation\":{\"s3Location\":\"s3://$M2_DATA_STORE/catalog/data/MFI01V.MFIDEMO.BNKCUST.DAT\"}}]}" --query taskId | sed 's/^"\(.*\)"$/\1/' )
                - |
                  . ./validation.lib
                  check_import_status $M2_APPLICATION_ID $M2_DATASET_IMPORT_TASK_ID MFI01V.MFIDEMO.BNKCUST
                - echo "Create MFI01V.MFIDEMO.BNKATYPE"
                - M2_DATASET_IMPORT_TASK_ID=$( aws m2 create-data-set-import-task --application-id "$M2_APPLICATION_ID" --import-config "{\"dataSets\":[{\"dataSet\":{\"storageType\":\"Database\",\"datasetName\":\"MFI01V.MFIDEMO.BNKATYPE\",\"relativePath\":\"DATA\",\"datasetOrg\":{\"vsam\":{\"format\":\"KS\",\"encoding\":\"A\",\"primaryKey\":{\"length\":1,\"offset\":0}}},\"recordLength\":{\"min\":100,\"max\":100}},\"externalLocation\":{\"s3Location\":\"s3://$M2_DATA_STORE/catalog/data/MFI01V.MFIDEMO.BNKATYPE.DAT\"}}]}" --query taskId | sed 's/^"\(.*\)"$/\1/' )
                - |
                  . ./validation.lib
                  check_import_status $M2_APPLICATION_ID $M2_DATASET_IMPORT_TASK_ID MFI01V.MFIDEMO.BNKATYPE
                - echo "Create MFI01V.MFIDEMO.BNKHELP"
                - M2_DATASET_IMPORT_TASK_ID=$( aws m2 create-data-set-import-task --application-id "$M2_APPLICATION_ID" --import-config "{\"dataSets\":[{\"dataSet\":{\"storageType\":\"Database\",\"datasetName\":\"MFI01V.MFIDEMO.BNKHELP\",\"relativePath\":\"DATA\",\"datasetOrg\":{\"vsam\":{\"format\":\"KS\",\"encoding\":\"A\",\"primaryKey\":{\"length\":8,\"offset\":0}}},\"recordLength\":{\"min\":83,\"max\":83}},\"externalLocation\":{\"s3Location\":\"s3://$M2_DATA_STORE/catalog/data/MFI01V.MFIDEMO.BNKHELP.DAT\"}}]}" --query taskId | sed 's/^"\(.*\)"$/\1/' )
                - |
                  . ./validation.lib
                  check_import_status $M2_APPLICATION_ID $M2_DATASET_IMPORT_TASK_ID MFI01V.MFIDEMO.BNKHELP
                - echo "Create MFI01V.MFIDEMO.BNKTXN"
                - M2_DATASET_IMPORT_TASK_ID=$( aws m2 create-data-set-import-task --application-id "$M2_APPLICATION_ID" --import-config "{\"dataSets\":[{\"dataSet\":{\"storageType\":\"Database\",\"datasetName\":\"MFI01V.MFIDEMO.BNKTXN\",\"relativePath\":\"DATA\",\"datasetOrg\":{\"vsam\":{\"format\":\"KS\",\"encoding\":\"A\",\"primaryKey\":{\"length\":26,\"offset\":16},\"alternateKeys\":[{\"length\":35,\"offset\":7,\"name\":\"Key1\"}]}},\"recordLength\":{\"min\":400,\"max\":400}},\"externalLocation\":{\"s3Location\":\"s3://$M2_DATA_STORE/catalog/data/MFI01V.MFIDEMO.BNKTXN.DAT\"}}]}" --query taskId | sed 's/^"\(.*\)"$/\1/' )
                - |
                  . ./validation.lib
                  check_import_status $M2_APPLICATION_ID $M2_DATASET_IMPORT_TASK_ID MFI01V.MFIDEMO.BNKTXN
            post_build:
              commands:
                - echo Import Data Files completed `date`
      Tags:
        - Key: Name
          Value: M2 Import Data Action
  M2StartApplication:
    Type: AWS::CodeBuild::Project
    Properties:
      TimeoutInMinutes: 15
      Artifacts:
        Type: CODEPIPELINE
      Environment:
        ComputeType: BUILD_GENERAL1_SMALL
        Image: aws/codebuild/standard:5.0
        Type: LINUX_CONTAINER
        EnvironmentVariables:
          - Name: RETAIN_ENVIRONMENT
            Type: PLAINTEXT
            Value: !Ref RetainEnvironment
      ServiceRole: !Ref M2StartAppRole
      LogsConfig:
        S3Logs:
          Location: !Sub ${S3LogsBucket.Arn}/CICDLogs
          Status: ENABLED
      Source:
        Type: CODEPIPELINE
        BuildSpec: |
          version: 0.2
          env:
            shell: bash
          phases:
            install:
              runtime-versions:
                python: 3.7
            pre_build:
              commands:
                - apt-get update
                - apt-get install jq
                - JCL_RESOURCE_ID=`grep JCL_RESOURCE_ID $CODEBUILD_SRC_DIR_M2StagingDeployOutput/build-variables.txt | cut -d '=' -f 2`
                - M2_APPLICATION_ID=`grep M2_APPLICATION_ID $CODEBUILD_SRC_DIR_M2StagingDeployOutput/build-variables.txt | cut -d '=' -f 2`
                - M2_ENVIRONMENT_ID=`grep M2_ENVIRONMENT_ID $CODEBUILD_SRC_DIR_M2StagingDeployOutput/build-variables.txt | cut -d '=' -f 2`
                - M2_DEPLOY_ID=`grep M2_DEPLOY_ID $CODEBUILD_SRC_DIR_M2StagingDeployOutput/build-variables.txt | cut -d '=' -f 2`
                - ARTIFACT_PATH=`grep ARTIFACT_PATH $CODEBUILD_SRC_DIR_M2StagingDeployOutput/build-variables.txt | cut -d '=' -f 2`
                - ARTIFACT_BUCKET=`grep ARTIFACT_BUCKET $CODEBUILD_SRC_DIR_M2StagingDeployOutput/build-variables.txt | cut -d '=' -f 2`
                - M2_LB_ENDPOINT=`grep M2_LB_ENDPOINT $CODEBUILD_SRC_DIR_M2StagingDeployOutput/build-variables.txt | cut -d '=' -f 2`
                - echo "Application ID - $M2_APPLICATION_ID"
                - echo "JCL_RESOURCE_ID - $JCL_RESOURCE_ID"
                - echo "M2_LB_ENDPOINT - $M2_LB_ENDPOINT"
            build:
              commands:
                - echo "Starting application"
                - |
                  counter=1
                  while [[ $counter -lt 20 ]]
                  do
                    APP_STATUS=`aws m2 get-application --application-id $M2_APPLICATION_ID --query status`
                    echo "Status of the application: ${APP_STATUS}"
                    if [[ "$APP_STATUS" == "\"Available\"" || "$APP_STATUS" == "\"Ready\"" ]]
                    then
                      M2_APPLICATION_VERSION=`aws m2 get-application --application-id $M2_APPLICATION_ID --query latestVersion.applicationVersion`
                      echo "Application is available with version - $M2_APPLICATION_VERSION"
                      deploy_counter=1
                      while [[ $deploy_counter -lt 20 ]]
                      do
                        DEPLOY_STATUS=`aws m2 get-deployment --application-id $M2_APPLICATION_ID --deployment-id $M2_DEPLOY_ID --query status`
                        echo "Status of the deployment: ${DEPLOY_STATUS}"
                        if [[ "$DEPLOY_STATUS" == "\"Succeeded\"" ]]
                        then
                          aws m2 start-application --application-id $M2_APPLICATION_ID
                          app_run_counter=1
                          while [[ $app_run_counter -lt 20 ]]
                          do
                            APP_RUN_STATUS=`aws m2 get-application --application-id $M2_APPLICATION_ID --query status`
                            echo "Status of the running application: ${APP_RUN_STATUS}"
                            if [[ "$APP_RUN_STATUS" == "\"Running\"" ]]
                            then
                              echo "App started successfully"
                              app_run_counter=42
                            else
                              sleep 15
                              app_run_counter=$(( $app_run_counter + 1 ))
                              if [[ $app_run_counter -eq 20 ]]
                              then
                                echo "Application failed to start in 5 minutes."
                                exit 1
                              fi
                            fi
                          done
                          deploy_counter=42
                        else
                          sleep 15
                          deploy_counter=$(( $deploy_counter + 1 ))
                          if [[ $deploy_counter -eq 20 ]]
                          then
                            echo "Deployment did not succeed in 5 minutes."
                            exit 1
                          fi
                        fi
                      done
                      counter=42
                    else
                      sleep 15
                      counter=$(( $counter + 1 ))
                      if [[ $counter -eq 20 ]]
                      then
                        echo "Application was not available in 5 minutes."
                        exit 1
                      fi
                    fi
                  done
            post_build:
              on-failure: CONTINUE
              commands:
                - |
                  if [[ "$RETAIN_ENVIRONMENT" == "false" && "$CODEBUILD_BUILD_SUCCEEDING" -eq 0 ]]
                  then
                    echo "Build is failing. Initiating Cleanup..."
                    if [[ ! -z "$M2_APPLICATION_ID" ]]
                    then
                      APP_STATUS=`aws m2 get-application --application-id $M2_APPLICATION_ID --query status`
                      echo "Status of the application: $APP_STATUS"
                      if [[ "$APP_STATUS" == "\"Starting\"" || "$APP_STATUS" == "\"Running\"" ]]
                      then
                        aws m2 stop-application --application-id $M2_APPLICATION_ID --query status
                        counter=1
                        while [[ $counter -lt 20 ]]
                        do
                          APP_STATUS=`aws m2 get-application --application-id $M2_APPLICATION_ID --query status`
                          echo "Status of the application: $APP_STATUS"
                          if [[ "$APP_STATUS" == "\"Stopped\"" ]]
                          then
                            echo "Application with Id $M2_APPLICATION_ID has been stopped"
                            counter=42
                          else
                            sleep 15
                            counter=$(( $counter + 1 ))
                            if [[ $counter -eq 20 ]]
                            then
                              echo "Application was not stopped in 5 minutes."
                              exit 1
                            fi
                          fi
                        done
                      fi
                      echo "Deleting application with Id - $M2_APPLICATION_ID"
                      aws m2 delete-application --application-id $M2_APPLICATION_ID --query status
                      echo "Waiting for the application to be deleted"
                      counter=1
                      while [[ $counter -lt 40 ]]
                      do
                        DELETED_APPLICATION=`aws m2 list-applications --query "applications[?applicationId=='$M2_APPLICATION_ID'].applicationId" --output text`
                        if [[ "$DELETED_APPLICATION" == '' ]]
                        then
                          echo "Application with Id $M2_APPLICATION_ID has been deleted"
                          echo "Deleting environment - $M2_ENVIRONMENT_ID"
                          aws m2 delete-environment --environment-id "$M2_ENVIRONMENT_ID"
                          counter=42
                        else
                          echo "Waiting for the application with Id $DELETED_APPLICATION to be deleted"
                          sleep 15
                          counter=$(( $counter + 1 ))
                          if [[ $counter -eq 40 ]]
                          then
                            echo "Application was not deleted in 10 minutes."
exit 1 fi fi done fi fi Tags: - Key: Name Value: M2 Start Application Action M2DeployProd: Type: AWS::CodeBuild::Project Properties: Artifacts: Type: CODEPIPELINE Environment: ComputeType: BUILD_GENERAL1_SMALL Image: aws/codebuild/standard:5.0 Type: LINUX_CONTAINER ServiceRole: !Ref M2DeployRole LogsConfig: S3Logs: Location: !Sub ${S3LogsBucket.Arn}/CICDLogs Status: ENABLED Source: Type: CODEPIPELINE # TODO BuildSpec: | version: 0.2 phases: install: runtime-versions: python: 3.7 pre_build: commands: - echo Installing source dependencies... build: commands: - echo Deploy started on `date` - echo Deploy completed on `date` Tags: - Key: Name Value: M2 Deploy Prod Phase CodePipelineServiceRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: '2012-10-17' Statement: Effect: Allow Principal: Service: codepipeline.amazonaws.com Action: sts:AssumeRole Policies: - PolicyName: CodePipelineServiceRolePolicy PolicyDocument: Version: "2012-10-17" Statement: - Effect: Allow Action: - codecommit:CancelUploadArchive - codecommit:GetBranch - codecommit:GetCommit - codecommit:GetUploadArchiveStatus - codecommit:UploadArchive Resource: !Sub arn:aws:codecommit:${AWS::Region}:${AWS::AccountId}:${RepositoryName} - Effect: Allow Action: - codebuild:BatchGetBuilds - codebuild:StartBuild Resource: - !GetAtt M2Build.Arn - !GetAtt M2DeployStaging.Arn - !GetAtt M2RunTests.Arn - !GetAtt M2DeployProd.Arn - !GetAtt M2ImportData.Arn - !GetAtt M2StartApplication.Arn - Effect: Allow Resource: - !Sub "${S3ArtifactRepository.Arn}" - !Sub "${S3ArtifactRepository.Arn}/*" Action: - s3:PutObject - s3:GetObject - s3:GetObjectVersion - s3:GetBucketAcl - s3:GetBucketLocation - Effect: Allow Action: - sns:Publish Resource: - !Ref PipelineNotificationSNSTopic - !Ref PipelineApprovalSNSTopic M2BuildRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: "2012-10-17" Statement: Effect: Allow Principal: Service: codebuild.amazonaws.com Action: sts:AssumeRole Policies: - 
PolicyName: M2BuildRolePolicy PolicyDocument: Version: "2012-10-17" Statement: - Effect: Allow Action: - logs:CreateLogGroup - logs:CreateLogStream - logs:PutLogEvents - logs:DeleteLogDelivery Resource: - !Sub "arn:aws:logs:${AWS::Region}:${AWS::AccountId}:log-group:/aws/codebuild/*" - !Sub "arn:aws:logs:${AWS::Region}:${AWS::AccountId}:log-group:/aws/codebuild/*:*" - Effect: Allow Action: - s3:PutObject - s3:GetObject - s3:GetObjectVersion - s3:GetBucketAcl - s3:GetBucketLocation Resource: - !Sub "arn:aws:s3:::codepipeline-${AWS::Region}-*" - !Sub "${S3ArtifactRepository.Arn}" - !Sub "${S3ArtifactRepository.Arn}/*" - !Sub "${S3LogsBucket.Arn}" - !Sub "${S3LogsBucket.Arn}/*" - Effect: Allow Action: - "ecr:BatchCheckLayerAvailability" - "ecr:GetDownloadUrlForLayer" - "ecr:BatchGetImage" Resource: - !Sub - arn:aws:ecr:${ecr_region}:${ecr_account}:repository/${ecr_repo} - ecr_region: !Select [ 3, !Split ['.', !Ref BuildImage]] ecr_account: !Select [ 0, !Split ['.', !Ref BuildImage]] ecr_repo: !Select [0, !Split [':', !Select [ 1, !Split ['/', !Ref BuildImage]]]] - Effect: Allow Action: - ecr:GetAuthorizationToken Resource: - "*" - Effect: Allow Action: - "s3:PutObject" Resource: - "arn:aws:s3:::aws-m2-repo-*/*" M2DeployRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: "2012-10-17" Statement: Effect: Allow Principal: Service: codebuild.amazonaws.com Action: sts:AssumeRole Policies: - PolicyName: M2DeployRolePolicy PolicyDocument: Version: "2012-10-17" Statement: - Effect: Allow Action: - s3:PutObject - s3:GetObject - s3:GetObjectVersion - s3:GetBucketAcl - s3:GetBucketLocation Resource: - !Sub "arn:aws:s3:::codepipeline-${AWS::Region}-*" - !Sub "${S3ArtifactRepository.Arn}" - !Sub "${S3ArtifactRepository.Arn}/*" - !Sub "${S3LogsBucket.Arn}" - !Sub "${S3LogsBucket.Arn}/*" - Effect: Allow Action: - m2:* - ec2:DescribeSubnets - ec2:DescribeSecurityGroups - ec2:DescribeNetworkInterfaces - ec2:ModifyNetworkInterfaceAttribute - 
ec2:CreateNetworkInterface - ec2:CreateNetworkInterfacePermission - ec2:DescribeVpcs - ec2:DescribeVpcAttribute - ec2:DeleteNetworkInterface - ec2:DescribeAccountAttributes - ec2:DescribeInternetGateways - ec2:DescribeNetworkInterfaces - elasticloadbalancing:CreateListener - elasticloadbalancing:CreateLoadBalancer - elasticloadbalancing:CreateTargetGroup - elasticloadbalancing:DeleteListener - elasticloadbalancing:DeleteLoadBalancer - elasticloadbalancing:DeleteTargetGroup - elasticloadbalancing:RegisterTargets - elasticloadbalancing:DeregisterTargets - s3:GetObject - s3:ListBucket - elasticfilesystem:DescribeMountTargets - fsx:DescribeFileSystems - logs:CreateLogGroup - logs:CreateLogStream - logs:PutLogEvents - logs:DescribeLogGroups - logs:CreateLogDelivery - logs:PutResourcePolicy - logs:UpdateLogDelivery - logs:DeleteLogDelivery - logs:DescribeResourcePolicies - logs:GetLogDelivery - logs:ListLogDeliveries - logs:DeleteLogDelivery - iam:CreateServiceLinkedRole Resource: - "*" M2ImportDataRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: "2012-10-17" Statement: Effect: Allow Principal: Service: codebuild.amazonaws.com Action: sts:AssumeRole Policies: - PolicyName: M2ImportDataRolePolicy PolicyDocument: Version: "2012-10-17" Statement: - Effect: Allow Action: - logs:CreateLogGroup - logs:CreateLogStream - logs:PutLogEvents - logs:DeleteLogDelivery Resource: - !Sub "arn:aws:logs:${AWS::Region}:${AWS::AccountId}:log-group:/aws/codebuild/*" - !Sub "arn:aws:logs:${AWS::Region}:${AWS::AccountId}:log-group:/aws/codebuild/*:*" - Effect: Allow Action: - s3:PutObject - s3:GetObject - s3:GetObjectVersion - s3:GetBucketAcl - s3:GetBucketLocation Resource: - !Sub "arn:aws:s3:::codepipeline-${AWS::Region}-*" - !Sub "${S3ArtifactRepository.Arn}" - !Sub "${S3ArtifactRepository.Arn}/*" - !Sub "${S3LogsBucket.Arn}" - !Sub "${S3LogsBucket.Arn}/*" - !Join - '' - - 'arn:aws:s3:::' - '{{resolve:ssm:m2cicd-data-bucket-' - !Sub ${ParameterStoreSuffix} - '}}' - 
!Join - '' - - 'arn:aws:s3:::' - '{{resolve:ssm:m2cicd-data-bucket-' - !Sub ${ParameterStoreSuffix} - '}}/*' - Effect: Allow Action: - m2:GetDataSetImportTask - m2:CreateDataSetImportTask - ec2:DescribeSubnets - ec2:DescribeSecurityGroups - ec2:DescribeNetworkInterfaces - ec2:ModifyNetworkInterfaceAttribute - ec2:CreateNetworkInterface - ec2:CreateNetworkInterfacePermission - ec2:DescribeVpcs - ec2:DescribeVpcAttribute - ec2:DeleteNetworkInterface - ec2:DescribeAccountAttributes - ec2:DescribeInternetGateways - elasticloadbalancing:CreateListener - elasticloadbalancing:CreateLoadBalancer - elasticloadbalancing:CreateTargetGroup - elasticloadbalancing:DeleteListener - elasticloadbalancing:DeleteLoadBalancer - elasticloadbalancing:DeleteTargetGroup - elasticloadbalancing:RegisterTargets - elasticloadbalancing:DeregisterTargets - s3:GetObject - s3:ListBucket - elasticfilesystem:DescribeMountTargets - fsx:DescribeFileSystems Resource: - "*" M2StartAppRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: "2012-10-17" Statement: Effect: Allow Principal: Service: codebuild.amazonaws.com Action: sts:AssumeRole Policies: - PolicyName: M2StartAppRolePolicy PolicyDocument: Version: "2012-10-17" Statement: - Effect: Allow Action: - logs:CreateLogGroup - logs:CreateLogStream - logs:PutLogEvents - logs:DeleteLogDelivery Resource: - !Sub "arn:aws:logs:${AWS::Region}:${AWS::AccountId}:log-group:/aws/codebuild/*" - !Sub "arn:aws:logs:${AWS::Region}:${AWS::AccountId}:log-group:/aws/codebuild/*:*" - Effect: Allow Action: - s3:PutObject - s3:GetObject - s3:GetObjectVersion - s3:GetBucketAcl - s3:GetBucketLocation Resource: - !Sub "arn:aws:s3:::codepipeline-${AWS::Region}-*" - !Sub "${S3ArtifactRepository.Arn}" - !Sub "${S3ArtifactRepository.Arn}/*" - !Sub "${S3LogsBucket.Arn}" - !Sub "${S3LogsBucket.Arn}/*" - Effect: Allow Action: - m2:* Resource: - "*" M2TestRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: "2012-10-17" Statement: Effect: 
Allow Principal: Service: codebuild.amazonaws.com Action: sts:AssumeRole Policies: - PolicyName: M2TestRolePolicy PolicyDocument: Version: "2012-10-17" Statement: - Effect: Allow Action: - logs:CreateLogGroup - logs:CreateLogStream - logs:PutLogEvents - logs:DeleteLogDelivery Resource: - !Sub "arn:aws:logs:${AWS::Region}:${AWS::AccountId}:log-group:/aws/codebuild/*" - !Sub "arn:aws:logs:${AWS::Region}:${AWS::AccountId}:log-group:/aws/codebuild/*:*" - Effect: Allow Action: - ec2:CreateNetworkInterfacePermission Resource: - '*' Condition: StringEquals: ec2:AuthorizedService: "codebuild.amazonaws.com" ArnEquals: ec2:Subnet: - !Sub "arn:aws:ec2:${AWS::Region}:${AWS::AccountId}:subnet/*" - Effect: Allow Action: - s3:PutObject - s3:GetObject - s3:GetObjectVersion - s3:GetBucketAcl - s3:GetBucketLocation Resource: - !Sub "arn:aws:s3:::codepipeline-${AWS::Region}-*" - !Sub "${S3ArtifactRepository.Arn}" - !Sub "${S3ArtifactRepository.Arn}/*" - !Sub "${S3LogsBucket.Arn}" - !Sub "${S3LogsBucket.Arn}/*" - Effect: Allow Action: - m2:StartBatchJob - m2:GetBatchJobExecution - m2:DeleteEnvironment - m2:StopApplication - m2:GetApplication - m2:DeleteApplication - m2:ListApplications - s3:GetObject - s3:ListBucket - ec2:DescribeAvailabilityZones - ec2:DescribeNetworkInterfaces - ec2:DescribeSecurityGroups - ec2:DescribeSubnets - ec2:DescribeVpcs - ec2:DeleteNetworkInterface - ec2:CreateNetworkInterface - ec2:DetachNetworkInterface - ec2:DeleteNetworkInterface - ec2:AttachNetworkInterface - ec2:DescribeDhcpOptions Resource: - "*" CleanupLambdaRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: 2012-10-17 Statement: - Effect: Allow Principal: Service: - lambda.amazonaws.com Action: - sts:AssumeRole Path: / Policies: - PolicyName: LambdaRolePolicy PolicyDocument: Version: "2012-10-17" Statement: - Effect: Allow Action: - s3:DeleteObject - s3:DeleteObjectVersion Resource: - !Sub - arn:aws:s3:::m2-artifacts-${StackString}-${AWS::AccountId}-${AWS::Region}/* - { 
StackString: !Select [0, !Split ['-', !Select [ 2, !Split ['/', !Ref 'AWS::StackId']]]] } - !Sub - arn:aws:s3:::m2-logs-${StackString}-${AWS::AccountId}-${AWS::Region}/* - { StackString: !Select [0, !Split ['-', !Select [ 2, !Split ['/', !Ref 'AWS::StackId']]]] } - Effect: Allow Action: - s3:ListBucket - s3:ListBucketVersions Resource: - !Sub - arn:aws:s3:::m2-artifacts-${StackString}-${AWS::AccountId}-${AWS::Region} - { StackString: !Select [0, !Split ['-', !Select [ 2, !Split ['/', !Ref 'AWS::StackId']]]] } - !Sub - arn:aws:s3:::m2-logs-${StackString}-${AWS::AccountId}-${AWS::Region} - { StackString: !Select [0, !Split ['-', !Select [ 2, !Split ['/', !Ref 'AWS::StackId']]]] } - Effect: Allow Action: logs:CreateLogGroup Resource: !Sub arn:aws:logs:${AWS::Region}:${AWS::AccountId}:* - Effect: Allow Action: - logs:CreateLogStream - logs:PutLogEvents Resource: !Sub arn:aws:logs:${AWS::Region}:${AWS::AccountId}:log-group:* S3BucketHandler: Type: AWS::Lambda::Function Properties: Handler: index.handler Role: !GetAtt CleanupLambdaRole.Arn Code: ZipFile: | import os import json import cfnresponse import boto3 from botocore.exceptions import ClientError s3 = boto3.resource('s3') def handler(event, context): print("Received event: %s" % json.dumps(event)) s3_bucket = s3.Bucket(event['ResourceProperties']['Bucket']) try: if event['RequestType'] == 'Create' or event['RequestType'] == 'Update': result = cfnresponse.SUCCESS elif event['RequestType'] == 'Delete': s3_bucket.object_versions.delete() result = cfnresponse.SUCCESS except ClientError as e: print('Error: %s', e) result = cfnresponse.FAILED cfnresponse.send(event, context, result, {}) Runtime: python3.9 Timeout: 300 CleanupS3ArtifactBucket: Type: "Custom::EmptyS3Bucket" Properties: ServiceToken: !GetAtt S3BucketHandler.Arn Bucket: !Ref S3ArtifactRepository CleanupS3LogsBucket: Type: "Custom::EmptyS3Bucket" Properties: ServiceToken: !GetAtt S3BucketHandler.Arn Bucket: !Ref S3LogsBucket CfnCloudWatchEventRole: Type: 
AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: 2012-10-17 Statement: - Effect: Allow Principal: Service: - events.amazonaws.com Action: sts:AssumeRole Path: / Policies: - PolicyName: cwe-pipeline-execution PolicyDocument: Version: 2012-10-17 Statement: - Effect: Allow Action: codepipeline:StartPipelineExecution Resource: !Sub arn:aws:codepipeline:${AWS::Region}:${AWS::AccountId}:${M2Pipeline} TriggerPipelineEventRule: Type: AWS::Events::Rule Properties: Description: Rule to automatically start the pipeline when a change occurs in the repository and branch. EventPattern: source: - aws.codecommit detail-type: - 'CodeCommit Repository State Change' resources: - !Sub arn:aws:codecommit:${AWS::Region}:${AWS::AccountId}:${RepositoryName} detail: event: - referenceCreated - referenceUpdated referenceType: - branch referenceName: - !Ref RepositoryBranch Targets: - Arn: !Sub arn:aws:codepipeline:${AWS::Region}:${AWS::AccountId}:${M2Pipeline} RoleArn: !GetAtt CfnCloudWatchEventRole.Arn Id: codepipeline-M2Pipeline M2KmsKey: Type: 'AWS::KMS::Key' UpdateReplacePolicy: Retain Properties: Description: Symmetric Key for M2 Service EnableKeyRotation: true KeyPolicy: Version: 2012-10-17 Id: m2-key-default Statement: - Sid: Enable IAM User Permissions Effect: Allow Principal: AWS: !Sub arn:aws:iam::${AWS::AccountId}:root Action: kms:* Resource: "*" - Effect: Allow Principal: Service: m2.amazonaws.com Action: kms:Decrypt Resource: "*" M2DatabaseSecret: Type: AWS::SecretsManager::Secret Properties: Description: M2 Database Credentials KmsKeyId: !GetAtt M2KmsKey.Arn GenerateSecretString: SecretStringTemplate: '{"username": "dbadmin"}' GenerateStringKey: password PasswordLength: 16 ExcludePunctuation: true M2DatabaseSecretResourcePolicy: Type: AWS::SecretsManager::ResourcePolicy Properties: SecretId: !Ref M2DatabaseSecret ResourcePolicy: Version: "2012-10-17" Statement: - Effect: Allow Principal: Service: "m2.amazonaws.com" Action: "secretsmanager:GetSecretValue" Resource: 
"*" M2DBClusterParameterGroup: Type: 'AWS::RDS::DBClusterParameterGroup' Properties: Description: CloudFormation Sample Aurora Cluster Parameter Group Family: aurora-postgresql10 Parameters: max_prepared_transactions: 100 M2DBSubnetGroup: Type: AWS::RDS::DBSubnetGroup Properties: DBSubnetGroupDescription: "DB subnet group for M2 deployment" SubnetIds: - !Join - '' - - '{{resolve:ssm:m2cicd-subnet1-' - !Sub ${ParameterStoreSuffix} - '}}' - !Join - '' - - '{{resolve:ssm:m2cicd-subnet2-' - !Sub ${ParameterStoreSuffix} - '}}' M2DBCluster: Type: 'AWS::RDS::DBCluster' Properties: DBClusterParameterGroupName: !Ref M2DBClusterParameterGroup MasterUsername: !Sub "{{resolve:secretsmanager:${M2DatabaseSecret}::username}}" MasterUserPassword: !Sub "{{resolve:secretsmanager:${M2DatabaseSecret}::password}}" Engine: aurora-postgresql EngineVersion: 10.14 EngineMode: serverless DBSubnetGroupName: !Ref M2DBSubnetGroup ScalingConfiguration: AutoPause: true MinCapacity: 2 MaxCapacity: 8 SecondsUntilAutoPause: 900 VpcSecurityGroupIds: - !GetAtt M2DatabaseSecurityGroup.GroupId M2SecretDBClusterAttachment: Type: "AWS::SecretsManager::SecretTargetAttachment" Properties: SecretId: !Ref M2DatabaseSecret TargetId: !Sub arn:aws:rds:${AWS::Region}:${AWS::AccountId}:cluster:${M2DBCluster} TargetType: AWS::RDS::DBCluster SNSTopicSubscriptionRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Statement: - Action: sts:AssumeRole Effect: Allow Principal: Service: - lambda.amazonaws.com Version: '2012-10-17' Path: / Policies: - PolicyName: root PolicyDocument: Version: '2012-10-17' Statement: - Effect: Allow Action: - logs:CreateLogGroup - logs:DescribeLogGroups - logs:CreateLogStream - logs:DescribeLogStreams - logs:PutLogEvents - logs:DeleteLogDelivery Resource: - !Sub 'arn:aws:logs:${AWS::Region}:${AWS::AccountId}:log-group:*' Sid: LogAccessPolicy - Effect: Allow Action: - sns:Unsubscribe - sns:Subscribe - sns:ListSubscriptionsByTopic Resource: '*' SNSTopicSubscriptionLambda: Type: 
AWS::Lambda::Function Properties: Handler: index.handler Role: !GetAtt SNSTopicSubscriptionRole.Arn Code: ZipFile: | import json import boto3 import traceback import cfnresponse def subscribe_endpoint(client, topic_arn, endpoint, protocol): 'Subscribe to SNS endpoint' response_data = client.subscribe( TopicArn=topic_arn, Protocol=protocol, Endpoint=endpoint, ReturnSubscriptionArn=True ) if response_data['ResponseMetadata']['HTTPStatusCode'] == 200: print(f'Endpoint {endpoint} subscribed to topic {topic_arn}.') else: raise Exception(f'Failed when subscribing {endpoint} to topic {topic_arn}.') def adjust_subscriptions(event, client): 'Adjust subscription' resource_properties = event['ResourceProperties'] topic_arn = resource_properties['TopicArn'] subscription_protocol = resource_properties['SubscriptionProtocol'] subscription_endpoints = resource_properties['SubscriptionEndpoints'] old_resource_properties = event['OldResourceProperties'] \ if 'OldResourceProperties' in event else None old_subscription_endpoints = old_resource_properties['SubscriptionEndpoints'] \ if old_resource_properties else None subscriptions_list = client.list_subscriptions_by_topic(TopicArn=topic_arn) existing_subscriptions = [] for subscription in subscriptions_list['Subscriptions']: if subscription['Protocol'] == 'email': existing_subscriptions.append(subscription['Endpoint']) if subscription_endpoints: for endpoint in subscription_endpoints: if endpoint not in existing_subscriptions or old_subscription_endpoints is None \ or endpoint not in old_subscription_endpoints: subscribe_endpoint(client, topic_arn, endpoint, subscription_protocol) def handler(event, context): 'Lambda entry point and handler' request_type = event['RequestType'] sns_client = boto3.client('sns') print(event) print(context) try: if request_type == 'Create' or request_type == 'Update': adjust_subscriptions(event, sns_client) message = None if request_type == 'Create': message = { "Message": "Created" } elif request_type 
== 'Update': message = { "Message": "Updated" } cfnresponse.send(event, context, "SUCCESS", message) else: cfnresponse.send(event, context, "SUCCESS", {"Message": "Function Not Applicable"}) except Exception as c_e: print(c_e) traceback.print_tb(c_e.__traceback__) cfnresponse.send(event, context, "FAILED", { "Message": "Exception" } ) Runtime: python3.9 Timeout: 300 SubscribeAdminEmailAddress: Type: Custom::SNSSubscription Properties: ServiceToken: !GetAtt SNSTopicSubscriptionLambda.Arn TopicArn: !Ref PipelineNotificationSNSTopic SubscriptionEndpoints: !Ref AdminEmailAddress SubscriptionProtocol: 'email' SubscribeApprovalEmailAddress: Type: Custom::SNSSubscription Properties: ServiceToken: !GetAtt SNSTopicSubscriptionLambda.Arn TopicArn: !Ref PipelineApprovalSNSTopic SubscriptionEndpoints: !Ref ApprovalEmailAddress SubscriptionProtocol: 'email' Outputs: CodeCommitRepo: Description: HTTPS endpoint to clone the CodeCommit repository Value: !Sub https://git-codecommit.${AWS::Region}.amazonaws.com/v1/repos/${RepositoryName} PipelineURL: Description: URL to access the pipeline on AWS Management Console Value: !Sub https://${AWS::Region}.console.aws.amazon.com/codesuite/codepipeline/pipelines/${M2Pipeline}/view?region=${AWS::Region} DatabaseDetailsURL: Description: URL to AWS Secrets Manager secret which contains M2 Database details Value: !Sub https://${AWS::Region}.console.aws.amazon.com/secretsmanager/home?region=${AWS::Region}#!/secret?name=${M2DatabaseSecret}