There has been quite a bit of concern over the recent discovery that Lenovo has been pre-installing a piece of adware (Superfish) that can intercept SSL traffic on the machines it is installed on.
What are the security risks of OEMs or other companies installing this kind of software on customers' systems?
Referring to this question about PCI DSS 2.0 and SSH keys, I want to ask:
Is PCI section 8.2 only for individual users, or does it also apply when a user connects via SSH to a root account?
I prefer ssh root@localhost over sudo, because you unlock the key once in ssh-agent and can then perform passwordless actions on multiple servers.
With sudo you need to type your password on every server, again and again (and it is the same password everywhere, because of LDAP), which is frustrating enough that the password will end up in an expect script.
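The workflow described above, sketched as a shell session (the key path and hostnames are placeholders, not from the original post):

```
# Unlock the key once in the local agent (passphrase prompted here only)
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_rsa

# Subsequent logins reuse the agent; no password is typed per server
ssh root@server1 uptime
ssh root@server2 uptime
```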
I want to know the technical details of how public PGP keyservers synchronize keys.
If I send my key to one keyserver, how exactly does it “travel” to all the other ones? Who sends it to whom, and how? What would I need to know if I wanted to write my own public keyserver software from scratch?
I have been looking for either a protocol description or the actual code that takes care of this, but I cannot find it.
Examples I had in mind are keyservers like http://pgp.mit.edu and http://pgp.zdv.uni-mainz.de. As far as I know, if I upload a key to the first one, it somehow gets “magically” transferred to the other one, and then to dozens of other keyservers publicly available on the internet.
I am asking for the concrete standards by which those keyservers exchange submitted keys, and if there is no single standard, then which strategies are usually used.
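For context, keyservers in the SKS pool gossip with configured peers and reconcile their key sets pairwise. Conceptually, each pair of peers works out which keys the other is missing and exchanges only those. A toy sketch of that idea (the `Keyserver` class is hypothetical; real SKS peers use a much more efficient polynomial-based set-reconciliation protocol rather than exchanging full hash lists):

```python
class Keyserver:
    """Toy model of a keyserver: keys indexed by their hash."""

    def __init__(self, keys=None):
        self.keys = dict(keys or {})   # hash -> key material

    def sync_with(self, peer):
        """One reconciliation pass: pull keys I lack, push keys the peer lacks."""
        mine, theirs = set(self.keys), set(peer.keys)
        for h in theirs - mine:        # fetch keys I am missing
            self.keys[h] = peer.keys[h]
        for h in mine - theirs:        # push keys the peer is missing
            peer.keys[h] = self.keys[h]

# After a sync pass, both servers hold the union of their key sets;
# repeated gossip between overlapping peers spreads a new key pool-wide.
a = Keyserver({"h1": "key1"})
b = Keyserver({"h2": "key2"})
a.sync_with(b)
```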
I am the system administrator of my local computer (Windows 7), and I am unable to write to an external hard drive unless it is encrypted with BitLocker To Go. The local computer itself is not encrypted. I have searched for a way to turn off the encryption requirement, but have been unable to find an answer. Any help would be appreciated.
If I have 2 websites, a.example.com and b.example.com. Those sites use TLS and SNI, and require client authentication with certificates created with my own CA.
The use case is that some clients have access to *.example.com, while other clients have access only to a.example.com.
Is it possible to create a certificate so that authentication is valid only when used on site a.example.com, and rejected when trying to connect to b.example.com?
Or would I have to create a second CA for this purpose?
In this case, is it possible to have a.example.com validate certificates against either one of those 2 CAs? (If yes, the answer might be specific to nginx, or doable if one CA is a subordinate of the other?)
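For reference, nginx does let each SNI vhost verify client certificates against its own CA file, and that file may concatenate several CA certificates. A minimal sketch under those assumptions (all paths are hypothetical):

```
# Hypothetical sketch: each SNI vhost trusts its own client-cert CA(s).
server {
    listen 443 ssl;
    server_name a.example.com;
    ssl_certificate         /etc/nginx/certs/a.crt;
    ssl_certificate_key     /etc/nginx/certs/a.key;
    # This file can concatenate both CA certificates, so a.example.com
    # accepts clients issued by either CA.
    ssl_client_certificate  /etc/nginx/certs/ca-bundle.pem;
    ssl_verify_client       on;
}

server {
    listen 443 ssl;
    server_name b.example.com;
    ssl_certificate         /etc/nginx/certs/b.crt;
    ssl_certificate_key     /etc/nginx/certs/b.key;
    ssl_client_certificate  /etc/nginx/certs/ca-b.pem;  # only one CA trusted here
    ssl_verify_client       on;
}
```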
I develop a lot of internal applications/websites for work using our standard SSO. It uses SAML2 and offers a few different levels of security.
We have a vendor using it for one of our internal CMSs. Basically, you have to sign into SSO every day to access the site. For almost all of the internal sites you only need to authenticate with SSO once per browser (the highest-level ones require fresh authentication every time). The vendor CMS is not at that level, though; it just has basic SSO authentication.
So I made some config changes to the vendor site and propagated them over the weekend. I needed to check the changes in a bunch of different browsers: Firefox, Chrome, IE8, IE11, Safari, Android, BlackBerry, iPhone…
I had to log into SSO each time… until I tried my iPhone. I went to the website and there was no redirect to SSO; I got right in. I hadn’t logged into the site on the iPhone in over a week. How could the iPhone be passing an authenticated cookie back to the SSO?
I am trying to figure out what the packet payload is of a possible TLS Heartbleed alert from my IDS. I have read that Wireshark can decrypt such traffic given certain keys, but isn’t that (more or less) real-time only? Could I load the capture into Wireshark if I use the certificate from my server?
I believe that would be the .cer file, right?
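A note of caution, hedged: a .cer file normally holds only the public certificate, while offline decryption of a saved capture needs the server's private key, and it only works when the session used plain RSA key exchange (not DHE/ECDHE). A hypothetical tshark invocation under those assumptions (IP, port, and key path are placeholders; Wireshark versions of the Heartbleed era expose this as the ssl.keys_list preference):

```
# Decrypt TLS in an offline capture using the server's RSA private key.
# Only works for RSA key exchange; ephemeral (DHE/ECDHE) sessions cannot
# be decrypted this way.
tshark -r heartbleed.pcap \
  -o "ssl.keys_list:192.0.2.10,443,http,/etc/ssl/private/server.key" \
  -V
```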
After months of logging, I have collated a list of rogue IP addresses trying to perform SSH brute-force logins, send spam, hack WordPress admin logins, upload spammy links, and so on.
So far nothing untoward has happened to my server, but I was wondering what I should do with this blacklist. I have over 1500 unique IP addresses distributed across 1000 different /24 subnets, and blocking all of them would introduce additional workload on my server.
Is there any value in an IP address blacklist?
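For what it's worth, a list that size can be shrunk before deciding whether to block anything, by collapsing individual addresses into the /24 networks that contain them. A small sketch using only the Python standard library (the sample addresses are documentation ranges, not real offenders):

```python
import ipaddress

def to_subnets(ips):
    """Collapse individual IPs into the distinct /24 networks containing them."""
    return sorted({ipaddress.ip_network(ip + "/24", strict=False) for ip in ips})

# Three offenders collapse into two firewall rules instead of three.
offenders = ["203.0.113.5", "203.0.113.99", "198.51.100.7"]
blocks = to_subnets(offenders)
```

With 1500 addresses across 1000 subnets this only roughly trims a third of the rules, so whether the remaining rule count is worth the firewall overhead is still the open question.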
I am trying to get this ‘pure white’ skin effect. Is this just playing with over-exposure or is it more post-processing work?
The above image is not mine. I found it on the web. It is an example of what I am trying to achieve.
EDIT: Just for clarification, in case anyone else has the same question: the idea/effect I am looking for is called ‘high-key’… yes, I am a complete noob.
Currently I’m playing with the Live View and movie-recording mode of my D3200, and I’m stuck on a few technical issues.
I can change the shutter speed and the aperture while filming, and both have an impact on the brightness of the clip.
But does changing the shutter speed make sense? I mean, when you’re recording a movie, the lens should be open all the time. And when I change the aperture, it should affect the DoF, yet it only changes the brightness, not the DoF.