Locked myself out of SSH with UFW in EC2 AWS

Posted 2020-02-16 08:54

I have an EC2 instance running Ubuntu. I ran sudo ufw enable and then allowed only the MongoDB port:

sudo ufw allow 27017

When the SSH connection dropped, I couldn't reconnect, since UFW's default policy now blocks incoming connections on port 22.
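
For future reference, a minimal sketch of the order of operations that avoids this lockout, assuming Ubuntu's stock OpenSSH application profile: allow SSH before enabling the firewall.

    sudo ufw allow OpenSSH       # or: sudo ufw allow 22/tcp -- do this BEFORE enabling
    sudo ufw allow 27017/tcp     # MongoDB
    sudo ufw enable
    sudo ufw status verbose      # confirm both rules are listed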

5 Answers
做自己的国王 · 2020-02-16 09:21

I had the same problem and found out that these steps work:

1- Stop your instance

2- Go to `Instance Settings -> View/Change User Data`

3- Paste this

Content-Type: multipart/mixed; boundary="//"
MIME-Version: 1.0
--//
Content-Type: text/cloud-config; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="cloud-config.txt"
#cloud-config
cloud_final_modules:
- [scripts-user, always]
--//
Content-Type: text/x-shellscript; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="userdata.txt"
#!/bin/bash
ufw disable    # turn UFW off so port 22 is reachable again
iptables -L    # list the current rules (output lands in cloud-init's log)
iptables -F    # flush any remaining iptables rules
--//

4- Start your instance

Hope it works for you
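
After the instance comes back up and you can connect, a quick sanity check that cloud-init actually ran the script, assuming the standard log locations on recent Ubuntu images:

    cloud-init status                               # should report "status: done"
    sudo tail -n 20 /var/log/cloud-init-output.log  # script output lands here
    sudo ufw status                                 # should now report "inactive"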

我命由我不由天 · 2020-02-16 09:21

Other approaches didn't work for me. My EC2 instance is based on a Bitnami image, and attaching the volume to another instance didn't work because of Marketplace locks.

So instead, stop the problem instance and paste this script under Instance Settings > View/Change User Data.

This approach does not require detaching the volume, so it's more straightforward than the other ones.


Content-Type: multipart/mixed; boundary="//"
MIME-Version: 1.0
--//
Content-Type: text/cloud-config; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="cloud-config.txt"
#cloud-config
cloud_final_modules:
- [scripts-user, always]
--//
Content-Type: text/x-shellscript; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="userdata.txt"
#!/bin/bash
ufw disable
iptables -L
iptables -F
--//

You must stop the instance before pasting this; afterwards, start the instance and you should be able to SSH in.
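
If you prefer the command line, the stop/edit/start cycle can be scripted with the AWS CLI. A sketch with a placeholder instance ID and file name; note that modify-instance-attribute expects the user data base64-encoded:

    # user data can only be changed while the instance is stopped
    aws ec2 stop-instances --instance-ids i-0123456789abcdef0
    aws ec2 wait instance-stopped --instance-ids i-0123456789abcdef0

    # upload the rescue script (the multipart text above, saved as rescue.txt)
    base64 rescue.txt > rescue.b64
    aws ec2 modify-instance-attribute --instance-id i-0123456789abcdef0 \
        --attribute userData --value file://rescue.b64

    # boot it again; cloud-init runs the script at startup
    aws ec2 start-instances --instance-ids i-0123456789abcdef0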

疯言疯语 · 2020-02-16 09:34

I know this is an old question, but I fixed mine by adding a command in View/Change User Data using bootcmd.

I first stopped my instance

Then I added this in User Data

#cloud-config
bootcmd:
 - cloud-init-per always fix_broken_ufw_1 sh -xc "/usr/sbin/service ufw stop >> /var/tmp/svc_$INSTANCE_ID 2>&1 || true"
 - cloud-init-per always fix_broken_ufw_2 sh -xc "/usr/sbin/ufw disable >> /var/tmp/ufw_$INSTANCE_ID 2>&1 || true"

Note: my instance is Ubuntu.
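
Once you are back in, you can verify that the bootcmd entries ran; the /var/tmp paths below come from the redirections in the two commands above:

    grep fix_broken_ufw /var/log/cloud-init.log   # cloud-init logs each per-boot command
    cat /var/tmp/svc_* /var/tmp/ufw_*             # output captured by the commands above
    sudo ufw status                               # should report "inactive"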

成全新的幸福 · 2020-02-16 09:39

Somehow Mahesh's answer didn't work for me; if you happen to be in my shoes, try this.

  • Launch a new instance (the recovery instance).
  • Stop the original instance (DO NOT TERMINATE).
  • Detach the volume (the problem volume) from the original instance.
  • Attach it to the recovery instance as /dev/sdf.
  • Log in to the recovery instance via ssh/putty.
  • Run sudo lsblk to display attached volumes and confirm the name of the problem volume. It usually begins with /dev/xvdf; mine is /dev/xvdf1.
  • Mount the problem volume.

    $ sudo mount /dev/xvdf1 /mnt
    $ cd /mnt/etc/ufw
    
  • Open the UFW configuration file:

    $ sudo vim ufw.conf
    
  • Press i to edit the file.
  • Change ENABLED=yes to ENABLED=no (a non-interactive alternative is sketched after this list).
  • Press Esc, then type :wq and press Enter to save the file.
  • Display the contents of ufw.conf using the command below and confirm that ENABLED=yes has been changed to ENABLED=no:

    $ sudo cat ufw.conf 
    
  • Unmount volume

    $ cd ~
    $ sudo umount /mnt
    
  • Detach problem volume from recovery instance and re-attach it to the original instance as /dev/sda1.

  • Start the original instance and you should be able to log back in.
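
As a non-interactive alternative to the vim edit in the steps above, the same change can be made with a single sed command while the volume is still mounted at /mnt:

    # flip ENABLED=yes to ENABLED=no in place on the mounted volume
    sudo sed -i 's/^ENABLED=yes/ENABLED=no/' /mnt/etc/ufw/ufw.conf
    grep ENABLED /mnt/etc/ufw/ufw.conf   # verify the change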

Source: here

姐就是有狂的资本 · 2020-02-16 09:44
  • Launch another EC2 server instance. The best way to accomplish this is to use EC2's "Launch More Like This" feature, which ensures that the OS type, security group, and other attributes are the same, saving a bit of setup time.
  • Stop the problem instance
  • Detach volume from problem instance
  • Attach volume to new instance

Note: Newer Linux kernels may rename your devices to /dev/xvdf through /dev/xvdp internally, even when the device name entered is /dev/sdf through /dev/sdp.
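
Because of that renaming, it's worth confirming the actual device name before mounting; lsblk is available on stock Ubuntu:

    lsblk   # the attached volume typically appears as xvdf, or xvdf1 if it is partitioned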

  • Mount the volume
cd ~
mkdir lnx1
sudo mount /dev/xvdf ./lnx1
  • Disable UFW
cd lnx1/etc/ufw
sudo vim ufw.conf

Now find ENABLED=yes and change it to ENABLED=no.

  • Detach volume

Be sure to unmount the volume first (from the home directory, where lnx1 was created):

cd ~
sudo umount ./lnx1/
  • Reattach the volume to /dev/sda1 on the problem instance (a CLI sketch of the detach/attach steps follows this list)
  • Boot problem instance
  • Reassign elastic IP address if necessary
  • Delete the temporary instance and its associated volume
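
If you prefer to script the volume shuffling rather than click through the console, here is a sketch using the AWS CLI; all volume and instance IDs below are placeholders:

    # detach the problem volume and wait until it is free
    aws ec2 detach-volume --volume-id vol-0123456789abcdef0
    aws ec2 wait volume-available --volume-ids vol-0123456789abcdef0

    # attach it to the recovery instance for the ufw.conf fix
    aws ec2 attach-volume --volume-id vol-0123456789abcdef0 \
        --instance-id i-0aaaaaaaaaaaaaaaa --device /dev/sdf

    # ...edit ufw.conf, unmount, and detach again as above, then
    # re-attach it to the problem instance as its root device
    aws ec2 attach-volume --volume-id vol-0123456789abcdef0 \
        --instance-id i-0bbbbbbbbbbbbbbbb --device /dev/sda1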

Voilà! You are good to go.
