Is this Braintree testing multi-purchase error something to worry about?

Posted 2019-08-15 13:52

Question:

I'm trying to figure out how to test with Braintree, and I'm running into what feels like a bandwidth error.

response = ::Braintree::Customer.create(payment_method_nonce: Braintree::Test::Nonce::Transactable)
token = response.customer.credit_cards.first.token
#so far so good

response = ::Braintree::Transaction.sale(payment_method_token: token, amount: "1.00")
#still good

response = ::Braintree::Transaction.sale(payment_method_token: token, amount: "1.00")
#response is failure
# => Braintree::ErrorResult ...   status: "gateway_rejected"

All that takes place without a pause.
If I wait a bit and run the sale line again, it works again.

This, of course, sets up a problem with test scripts. I can mock out the actual connection to Braintree, but I'm slightly worried about this. Should I be?

Answer 1:

I work at Braintree. If you have more questions, you can always get in touch with our support team.

You can see what gateway_rejected means on the transaction statuses page of the API docs:

Gateway rejected

The gateway rejected the transaction because AVS, CVV, duplicate or fraud checks failed.

Transactions also have a gateway rejection reason, which in this case will be duplicate.
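
If you want your test code or error handling to distinguish duplicate rejections from other failures, you can read that field off the error result. A minimal sketch, assuming the Braintree Ruby gem and the token from the question:

result = ::Braintree::Transaction.sale(payment_method_token: token, amount: "1.00")
unless result.success?
  # Gateway-rejected results still carry the transaction object.
  puts result.transaction.status                    # => "gateway_rejected"
  puts result.transaction.gateway_rejection_reason  # => "duplicate"
end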

You can find more information about duplicate checking settings in the control panel docs:

Configure duplicate transaction checking

Duplicate transaction checking is enabled by default with a 30-second window in both the sandbox and production environments. These settings can be updated or disabled by users with Account Admin privileges.

  1. Log into the Control Panel
  2. Navigate to Settings > Processing > Duplicate Transaction Checking
  3. Click Edit to adjust the time window or Enable/Disable to turn the feature on/off
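
If you would rather leave duplicate checking enabled in the sandbox, one workaround for back-to-back test sales (assuming the duplicate check keys off the same payment method and amount inside that 30-second window) is to vary the amount per call, roughly like this:

2.times do |i|
  result = ::Braintree::Transaction.sale(
    payment_method_token: token,
    amount: format("1.%02d", i + 1)  # "1.01", "1.02", ... so consecutive sales differ
  )
  raise result.message unless result.success?
end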


Answer 2:

Looks like it may be a rate-limit error. Search their help/docs/site for information on rate limiting so you know what the limits are and can work around them.

However, if you're talking about automated tests, I would recommend not using external services in your test suite and mocking everything out. Ideally, your test suite should be able to run even when the network connection is down, and it shouldn't slow down when third-party services or your network are slow.
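
As a rough sketch (assuming RSpec and the Braintree Ruby gem; the double's shape is only illustrative), stubbing the gateway call might look like this:

# In a spec, return a canned success instead of hitting Braintree's sandbox.
fake_result = double("Braintree::SuccessfulResult", success?: true)
allow(::Braintree::Transaction).to receive(:sale).and_return(fake_result)

result = ::Braintree::Transaction.sale(payment_method_token: "fake-token", amount: "1.00")
result.success?  # => true, with no network call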

If you really want a full integration test against your third-party services, you can create a separate set of tests annotated with something like "@external", and schedule them to run once a week or so just to flag unexpected changes or errors.
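
For example, in RSpec that tagging could look roughly like this (the :external tag name and the weekly schedule are assumptions, not anything Braintree-specific):

# spec/spec_helper.rb: skip externally-tagged specs on normal runs.
RSpec.configure do |config|
  config.filter_run_excluding :external
end

# spec/braintree_sandbox_spec.rb: only runs with `rspec --tag external`.
RSpec.describe "Braintree sandbox", :external do
  it "performs a real sale against the sandbox" do
    create_result = ::Braintree::Customer.create(payment_method_nonce: Braintree::Test::Nonce::Transactable)
    token = create_result.customer.credit_cards.first.token
    sale_result = ::Braintree::Transaction.sale(payment_method_token: token, amount: "1.00")
    expect(sale_result).to be_success
  end
end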